00:00:00.001 Started by upstream project "autotest-per-patch" build number 132532
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.027 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.029 The recommended git tool is: git
00:00:00.029 using credential 00000000-0000-0000-0000-000000000002
00:00:00.032 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.046 Fetching changes from the remote Git repository
00:00:00.050 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.062 Using shallow fetch with depth 1
00:00:00.062 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.062 > git --version # timeout=10
00:00:00.079 > git --version # 'git version 2.39.2'
00:00:00.079 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.102 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.102 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.693 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.707 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.719 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:02.719 > git config core.sparsecheckout # timeout=10
00:00:02.729 > git read-tree -mu HEAD # timeout=10
00:00:02.745 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:02.768 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:02.768 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:02.992 [Pipeline] Start of Pipeline
00:00:03.009 [Pipeline] library
00:00:03.011 Loading library shm_lib@master
00:00:03.011 Library shm_lib@master is cached. Copying from home.
00:00:03.028 [Pipeline] node
00:00:03.039 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:03.041 [Pipeline] {
00:00:03.050 [Pipeline] catchError
00:00:03.051 [Pipeline] {
00:00:03.064 [Pipeline] wrap
00:00:03.071 [Pipeline] {
00:00:03.080 [Pipeline] stage
00:00:03.082 [Pipeline] { (Prologue)
00:00:03.101 [Pipeline] echo
00:00:03.102 Node: VM-host-WFP7
00:00:03.110 [Pipeline] cleanWs
00:00:03.120 [WS-CLEANUP] Deleting project workspace...
00:00:03.120 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.127 [WS-CLEANUP] done
00:00:03.346 [Pipeline] setCustomBuildProperty
00:00:03.459 [Pipeline] httpRequest
00:00:04.078 [Pipeline] echo
00:00:04.080 Sorcerer 10.211.164.101 is alive
00:00:04.088 [Pipeline] retry
00:00:04.090 [Pipeline] {
00:00:04.102 [Pipeline] httpRequest
00:00:04.107 HttpMethod: GET
00:00:04.107 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.108 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.109 Response Code: HTTP/1.1 200 OK
00:00:04.109 Success: Status code 200 is in the accepted range: 200,404
00:00:04.110 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.255 [Pipeline] }
00:00:04.272 [Pipeline] // retry
00:00:04.279 [Pipeline] sh
00:00:04.556 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.566 [Pipeline] httpRequest
00:00:04.950 [Pipeline] echo
00:00:04.952 Sorcerer 10.211.164.101 is alive
00:00:04.959 [Pipeline] retry
00:00:04.961 [Pipeline] {
00:00:04.973 [Pipeline] httpRequest
00:00:04.977 HttpMethod: GET
00:00:04.977 URL: http://10.211.164.101/packages/spdk_9f3071c5f7cedc5f53a2d02f16d2811f7e215671.tar.gz
00:00:04.978 Sending request to url: http://10.211.164.101/packages/spdk_9f3071c5f7cedc5f53a2d02f16d2811f7e215671.tar.gz
00:00:04.979 Response Code: HTTP/1.1 200 OK
00:00:04.979 Success: Status code 200 is in the accepted range: 200,404
00:00:04.980 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_9f3071c5f7cedc5f53a2d02f16d2811f7e215671.tar.gz
00:00:23.131 [Pipeline] }
00:00:23.149 [Pipeline] // retry
00:00:23.157 [Pipeline] sh
00:00:23.441 + tar --no-same-owner -xf spdk_9f3071c5f7cedc5f53a2d02f16d2811f7e215671.tar.gz
00:00:26.020 [Pipeline] sh
00:00:26.302 + git -C spdk log --oneline -n5
00:00:26.302 9f3071c5f nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:00:26.302 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK
00:00:26.302 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set
00:00:26.302 aa58c9e0b dif: Add spdk_dif_pi_format_get_size() to use for NVMe PRACT
00:00:26.302 e93f0f941 bdev/malloc: Support accel sequence when DIF is enabled
00:00:26.324 [Pipeline] writeFile
00:00:26.341 [Pipeline] sh
00:00:26.624 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:26.635 [Pipeline] sh
00:00:26.919 + cat autorun-spdk.conf
00:00:26.919 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:26.919 SPDK_RUN_ASAN=1
00:00:26.919 SPDK_RUN_UBSAN=1
00:00:26.919 SPDK_TEST_RAID=1
00:00:26.919 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:26.926 RUN_NIGHTLY=0
00:00:26.930 [Pipeline] }
00:00:26.954 [Pipeline] // stage
00:00:26.978 [Pipeline] stage
00:00:26.983 [Pipeline] { (Run VM)
00:00:27.001 [Pipeline] sh
00:00:27.283 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:27.283 + echo 'Start stage prepare_nvme.sh'
00:00:27.283 Start stage prepare_nvme.sh
00:00:27.283 + [[ -n 2 ]]
00:00:27.283 + disk_prefix=ex2
00:00:27.283 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:27.283 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:27.283 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:27.283 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:27.283 ++ SPDK_RUN_ASAN=1
00:00:27.283 ++ SPDK_RUN_UBSAN=1
00:00:27.283 ++ SPDK_TEST_RAID=1
00:00:27.283 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:27.283 ++ RUN_NIGHTLY=0
00:00:27.283 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:27.283 + nvme_files=()
00:00:27.283 + declare -A nvme_files
00:00:27.283 + backend_dir=/var/lib/libvirt/images/backends
00:00:27.283 + nvme_files['nvme.img']=5G
00:00:27.283 + nvme_files['nvme-cmb.img']=5G
00:00:27.283 + nvme_files['nvme-multi0.img']=4G
00:00:27.283 + nvme_files['nvme-multi1.img']=4G
00:00:27.283 + nvme_files['nvme-multi2.img']=4G
00:00:27.283 + nvme_files['nvme-openstack.img']=8G
00:00:27.283 + nvme_files['nvme-zns.img']=5G
00:00:27.283 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:27.283 + (( SPDK_TEST_FTL == 1 ))
00:00:27.283 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:27.283 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:27.283 + for nvme in "${!nvme_files[@]}"
00:00:27.283 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:00:27.283 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:27.283 + for nvme in "${!nvme_files[@]}"
00:00:27.283 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:00:27.283 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:27.283 + for nvme in "${!nvme_files[@]}"
00:00:27.283 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:00:27.283 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:27.283 + for nvme in "${!nvme_files[@]}"
00:00:27.283 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:00:27.283 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:27.283 + for nvme in "${!nvme_files[@]}"
00:00:27.283 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:00:27.283 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:27.283 + for nvme in "${!nvme_files[@]}"
00:00:27.283 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:00:27.541 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:27.541 + for nvme in "${!nvme_files[@]}"
00:00:27.541 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:00:28.109 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:28.109 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:00:28.109 + echo 'End stage prepare_nvme.sh'
00:00:28.109 End stage prepare_nvme.sh
00:00:28.120 [Pipeline] sh
00:00:28.397 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:28.398 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:00:28.398
00:00:28.398 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:28.398 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:28.398 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:28.398 HELP=0
00:00:28.398 DRY_RUN=0
00:00:28.398 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:00:28.398 NVME_DISKS_TYPE=nvme,nvme,
00:00:28.398 NVME_AUTO_CREATE=0
00:00:28.398 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:00:28.398 NVME_CMB=,,
00:00:28.398 NVME_PMR=,,
00:00:28.398 NVME_ZNS=,,
00:00:28.398 NVME_MS=,,
00:00:28.398 NVME_FDP=,,
00:00:28.398 SPDK_VAGRANT_DISTRO=fedora39
00:00:28.398 SPDK_VAGRANT_VMCPU=10
00:00:28.398 SPDK_VAGRANT_VMRAM=12288
00:00:28.398 SPDK_VAGRANT_PROVIDER=libvirt
00:00:28.398 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:28.398 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:28.398 SPDK_OPENSTACK_NETWORK=0
00:00:28.398 VAGRANT_PACKAGE_BOX=0
00:00:28.398 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:28.398 FORCE_DISTRO=true
00:00:28.398 VAGRANT_BOX_VERSION=
00:00:28.398 EXTRA_VAGRANTFILES=
00:00:28.398 NIC_MODEL=virtio
00:00:28.398
00:00:28.398 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:28.398 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:30.938 Bringing machine 'default' up with 'libvirt' provider...
00:00:31.197 ==> default: Creating image (snapshot of base box volume).
00:00:31.198 ==> default: Creating domain with the following settings...
00:00:31.198 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732643112_8483429624604f96712f
00:00:31.198 ==> default: -- Domain type: kvm
00:00:31.198 ==> default: -- Cpus: 10
00:00:31.198 ==> default: -- Feature: acpi
00:00:31.198 ==> default: -- Feature: apic
00:00:31.198 ==> default: -- Feature: pae
00:00:31.198 ==> default: -- Memory: 12288M
00:00:31.198 ==> default: -- Memory Backing: hugepages:
00:00:31.198 ==> default: -- Management MAC:
00:00:31.198 ==> default: -- Loader:
00:00:31.198 ==> default: -- Nvram:
00:00:31.198 ==> default: -- Base box: spdk/fedora39
00:00:31.198 ==> default: -- Storage pool: default
00:00:31.198 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732643112_8483429624604f96712f.img (20G)
00:00:31.198 ==> default: -- Volume Cache: default
00:00:31.198 ==> default: -- Kernel:
00:00:31.198 ==> default: -- Initrd:
00:00:31.198 ==> default: -- Graphics Type: vnc
00:00:31.198 ==> default: -- Graphics Port: -1
00:00:31.198 ==> default: -- Graphics IP: 127.0.0.1
00:00:31.198 ==> default: -- Graphics Password: Not defined
00:00:31.198 ==> default: -- Video Type: cirrus
00:00:31.198 ==> default: -- Video VRAM: 9216
00:00:31.198 ==> default: -- Sound Type:
00:00:31.198 ==> default: -- Keymap: en-us
00:00:31.198 ==> default: -- TPM Path:
00:00:31.198 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:31.198 ==> default: -- Command line args:
00:00:31.198 ==> default: -> value=-device,
00:00:31.198 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:31.198 ==> default: -> value=-drive,
00:00:31.198 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:00:31.198 ==> default: -> value=-device,
00:00:31.198 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:31.198 ==> default: -> value=-device,
00:00:31.198 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:31.198 ==> default: -> value=-drive,
00:00:31.198 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:31.198 ==> default: -> value=-device,
00:00:31.198 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:31.198 ==> default: -> value=-drive,
00:00:31.198 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:31.198 ==> default: -> value=-device,
00:00:31.198 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:31.198 ==> default: -> value=-drive,
00:00:31.198 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:31.198 ==> default: -> value=-device,
00:00:31.198 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:31.457 ==> default: Creating shared folders metadata...
00:00:31.457 ==> default: Starting domain.
00:00:32.836 ==> default: Waiting for domain to get an IP address...
00:00:50.930 ==> default: Waiting for SSH to become available...
00:00:50.930 ==> default: Configuring and enabling network interfaces...
00:00:56.209 default: SSH address: 192.168.121.15:22
00:00:56.209 default: SSH username: vagrant
00:00:56.209 default: SSH auth method: private key
00:00:58.755 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:06.879 ==> default: Mounting SSHFS shared folder...
00:01:09.419 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:09.419 ==> default: Checking Mount..
00:01:11.328 ==> default: Folder Successfully Mounted!
00:01:11.328 ==> default: Running provisioner: file...
00:01:12.267 default: ~/.gitconfig => .gitconfig
00:01:12.835
00:01:12.835 SUCCESS!
00:01:12.835
00:01:12.835 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:12.835 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:12.835 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:12.835
00:01:12.844 [Pipeline] }
00:01:12.858 [Pipeline] // stage
00:01:12.866 [Pipeline] dir
00:01:12.866 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:12.868 [Pipeline] {
00:01:12.879 [Pipeline] catchError
00:01:12.881 [Pipeline] {
00:01:12.892 [Pipeline] sh
00:01:13.173 + vagrant ssh-config --host vagrant
00:01:13.173 + sed -ne /^Host/,$p
00:01:13.173 + tee ssh_conf
00:01:16.472 Host vagrant
00:01:16.472 HostName 192.168.121.15
00:01:16.472 User vagrant
00:01:16.472 Port 22
00:01:16.472 UserKnownHostsFile /dev/null
00:01:16.472 StrictHostKeyChecking no
00:01:16.472 PasswordAuthentication no
00:01:16.472 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:16.472 IdentitiesOnly yes
00:01:16.472 LogLevel FATAL
00:01:16.472 ForwardAgent yes
00:01:16.472 ForwardX11 yes
00:01:16.472
00:01:16.488 [Pipeline] withEnv
00:01:16.490 [Pipeline] {
00:01:16.504 [Pipeline] sh
00:01:16.787 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:16.787 source /etc/os-release
00:01:16.787 [[ -e /image.version ]] && img=$(< /image.version)
00:01:16.787 # Minimal, systemd-like check.
00:01:16.787 if [[ -e /.dockerenv ]]; then
00:01:16.787 # Clear garbage from the node's name:
00:01:16.787 # agt-er_autotest_547-896 -> autotest_547-896
00:01:16.787 # $HOSTNAME is the actual container id
00:01:16.787 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:16.787 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:16.787 # We can assume this is a mount from a host where container is running,
00:01:16.787 # so fetch its hostname to easily identify the target swarm worker.
00:01:16.787 container="$(< /etc/hostname) ($agent)"
00:01:16.787 else
00:01:16.787 # Fallback
00:01:16.787 container=$agent
00:01:16.787 fi
00:01:16.787 fi
00:01:16.787 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:16.787
00:01:17.057 [Pipeline] }
00:01:17.069 [Pipeline] // withEnv
00:01:17.078 [Pipeline] setCustomBuildProperty
00:01:17.091 [Pipeline] stage
00:01:17.093 [Pipeline] { (Tests)
00:01:17.104 [Pipeline] sh
00:01:17.382 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:17.655 [Pipeline] sh
00:01:17.933 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:18.205 [Pipeline] timeout
00:01:18.206 Timeout set to expire in 1 hr 30 min
00:01:18.207 [Pipeline] {
00:01:18.220 [Pipeline] sh
00:01:18.502 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:19.071 HEAD is now at 9f3071c5f nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:01:19.084 [Pipeline] sh
00:01:19.367 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:19.639 [Pipeline] sh
00:01:19.918 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:20.223 [Pipeline] sh
00:01:20.506 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:20.766 ++ readlink -f spdk_repo
00:01:20.766 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:20.766 + [[ -n /home/vagrant/spdk_repo ]]
00:01:20.766 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:20.766 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:20.766 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:20.766 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:20.766 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:20.766 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:20.766 + cd /home/vagrant/spdk_repo
00:01:20.766 + source /etc/os-release
00:01:20.766 ++ NAME='Fedora Linux'
00:01:20.766 ++ VERSION='39 (Cloud Edition)'
00:01:20.766 ++ ID=fedora
00:01:20.766 ++ VERSION_ID=39
00:01:20.766 ++ VERSION_CODENAME=
00:01:20.766 ++ PLATFORM_ID=platform:f39
00:01:20.766 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:20.766 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:20.766 ++ LOGO=fedora-logo-icon
00:01:20.766 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:20.766 ++ HOME_URL=https://fedoraproject.org/
00:01:20.766 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:20.766 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:20.766 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:20.766 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:20.766 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:20.766 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:20.766 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:20.766 ++ SUPPORT_END=2024-11-12
00:01:20.766 ++ VARIANT='Cloud Edition'
00:01:20.766 ++ VARIANT_ID=cloud
00:01:20.766 + uname -a
00:01:20.766 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:20.766 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:21.336 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:21.336 Hugepages
00:01:21.336 node hugesize free / total
00:01:21.336 node0 1048576kB 0 / 0
00:01:21.336 node0 2048kB 0 / 0
00:01:21.336
00:01:21.336 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:21.336 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:21.336 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:21.336 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:21.336 + rm -f /tmp/spdk-ld-path
00:01:21.336 + source autorun-spdk.conf
00:01:21.336 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:21.336 ++ SPDK_RUN_ASAN=1
00:01:21.336 ++ SPDK_RUN_UBSAN=1
00:01:21.336 ++ SPDK_TEST_RAID=1
00:01:21.336 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:21.336 ++ RUN_NIGHTLY=0
00:01:21.336 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:21.336 + [[ -n '' ]]
00:01:21.336 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:21.336 + for M in /var/spdk/build-*-manifest.txt
00:01:21.336 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:21.336 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:21.336 + for M in /var/spdk/build-*-manifest.txt
00:01:21.336 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:21.336 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:21.336 + for M in /var/spdk/build-*-manifest.txt
00:01:21.336 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:21.336 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:21.336 ++ uname
00:01:21.336 + [[ Linux == \L\i\n\u\x ]]
00:01:21.336 + sudo dmesg -T
00:01:21.597 + sudo dmesg --clear
00:01:21.597 + dmesg_pid=5423
00:01:21.597 + [[ Fedora Linux == FreeBSD ]]
00:01:21.597 + sudo dmesg -Tw
00:01:21.597 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:21.597 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:21.597 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:21.597 + [[ -x /usr/src/fio-static/fio ]]
00:01:21.597 + export FIO_BIN=/usr/src/fio-static/fio
00:01:21.597 + FIO_BIN=/usr/src/fio-static/fio
00:01:21.597 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:21.597 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:21.597 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:21.597 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:21.597 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:21.597 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:21.597 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:21.597 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:21.597 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:21.597 17:46:03 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:21.597 17:46:03 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:21.597 17:46:03 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:21.597 17:46:03 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:21.597 17:46:03 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:21.597 17:46:03 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:21.597 17:46:03 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:21.597 17:46:03 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:01:21.597 17:46:03 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:21.597 17:46:03 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:21.597 17:46:03 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:21.597 17:46:03 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:21.597 17:46:03 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:21.597 17:46:03 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:21.597 17:46:03 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:21.597 17:46:03 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:21.597 17:46:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:21.597 17:46:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:21.597 17:46:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:21.597 17:46:03 -- paths/export.sh@5 -- $ export PATH
00:01:21.597 17:46:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:21.857 17:46:03 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:21.857 17:46:03 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:21.857 17:46:03 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732643163.XXXXXX
00:01:21.857 17:46:03 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732643163.2vcZfq
00:01:21.857 17:46:03 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:21.857 17:46:03 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:21.857 17:46:03 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:21.857 17:46:03 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:21.857 17:46:03 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:21.857 17:46:03 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:21.857 17:46:03 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:21.857 17:46:03 -- common/autotest_common.sh@10 -- $ set +x
00:01:21.857 17:46:03 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:21.857 17:46:03 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:21.857 17:46:03 -- pm/common@17 -- $ local monitor
00:01:21.857 17:46:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:21.857 17:46:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:21.857 17:46:03 -- pm/common@25 -- $ sleep 1
00:01:21.857 17:46:03 -- pm/common@21 -- $ date +%s
00:01:21.857 17:46:03 -- pm/common@21 -- $ date +%s
00:01:21.857 17:46:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732643163
00:01:21.857 17:46:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732643163
00:01:21.857 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732643163_collect-vmstat.pm.log
00:01:21.857 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732643163_collect-cpu-load.pm.log
00:01:22.797 17:46:04 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:22.797 17:46:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:22.797 17:46:04 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:22.797 17:46:04 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:22.797 17:46:04 -- spdk/autobuild.sh@16 -- $ date -u
00:01:22.797 Tue Nov 26 05:46:04 PM UTC 2024
00:01:22.797 17:46:04 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:22.797 v25.01-pre-270-g9f3071c5f
00:01:22.797 17:46:04 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:22.797 17:46:04 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:22.797 17:46:04 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:22.797 17:46:04 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:22.797 17:46:04 -- common/autotest_common.sh@10 -- $ set +x
00:01:22.797 ************************************
00:01:22.797 START TEST asan
00:01:22.797 ************************************
00:01:22.797 using asan
00:01:22.797 17:46:04 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:22.797
00:01:22.797 real	0m0.001s
00:01:22.797 user	0m0.000s
00:01:22.797 sys	0m0.000s
00:01:22.797 17:46:04 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:22.797 17:46:04 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:22.797 ************************************
00:01:22.797 END TEST asan
00:01:22.797 ************************************
00:01:22.797 17:46:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:22.797 17:46:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:22.797 17:46:04 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:22.797 17:46:04 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:22.797 17:46:04 -- common/autotest_common.sh@10 -- $ set +x
00:01:22.797 ************************************
00:01:22.797 START TEST ubsan
00:01:22.797 ************************************
00:01:22.797 using ubsan
00:01:22.797 17:46:04 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:22.797
00:01:22.797 real	0m0.000s
00:01:22.797 user	0m0.000s
00:01:22.797 sys	0m0.000s
00:01:22.797 17:46:04 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:22.797 17:46:04 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:22.797 ************************************
00:01:22.797 END TEST ubsan
00:01:22.797 ************************************
00:01:22.797 17:46:04 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:22.797 17:46:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:22.797 17:46:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:22.797 17:46:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:22.797 17:46:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:22.797 17:46:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:22.797 17:46:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:22.797 17:46:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:22.797 17:46:04 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:23.056 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:23.056 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:23.624 Using 'verbs' RDMA provider
00:01:39.462 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:57.559 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:57.559 Creating mk/config.mk...done.
00:01:57.559 Creating mk/cc.flags.mk...done.
00:01:57.559 Type 'make' to build.
00:01:57.559 17:46:37 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:57.559 17:46:37 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:57.559 17:46:37 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:57.559 17:46:37 -- common/autotest_common.sh@10 -- $ set +x
00:01:57.559 ************************************
00:01:57.559 START TEST make
00:01:57.559 ************************************
00:01:57.559 17:46:37 make -- common/autotest_common.sh@1129 -- $ make -j10
00:01:57.559 make[1]: Nothing to be done for 'all'.
00:02:07.555 The Meson build system
00:02:07.555 Version: 1.5.0
00:02:07.555 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:07.555 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:07.555 Build type: native build
00:02:07.555 Program cat found: YES (/usr/bin/cat)
00:02:07.555 Project name: DPDK
00:02:07.555 Project version: 24.03.0
00:02:07.555 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:07.555 C linker for the host machine: cc ld.bfd 2.40-14
00:02:07.555 Host machine cpu family: x86_64
00:02:07.555 Host machine cpu: x86_64
00:02:07.555 Message: ## Building in Developer Mode ##
00:02:07.555 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:07.555 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:07.555 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:07.555 Program python3 found: YES (/usr/bin/python3)
00:02:07.555 Program cat found: YES (/usr/bin/cat)
00:02:07.555 Compiler for C supports arguments -march=native: YES
00:02:07.555 Checking for size of "void *" : 8
00:02:07.555 Checking for size of "void *" : 8 (cached)
00:02:07.555 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:07.555 Library m found: YES
00:02:07.555 Library numa found: YES
00:02:07.555 Has header "numaif.h" : YES
00:02:07.555 Library fdt found: NO
00:02:07.555 Library execinfo found: NO
00:02:07.555 Has header "execinfo.h" : YES
00:02:07.555 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:07.555 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:07.555 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:07.555 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:07.555 Run-time dependency openssl found: YES 3.1.1
00:02:07.555 Run-time dependency libpcap found: YES 1.10.4
00:02:07.555 Has header "pcap.h" with dependency
libpcap: YES 00:02:07.555 Compiler for C supports arguments -Wcast-qual: YES 00:02:07.555 Compiler for C supports arguments -Wdeprecated: YES 00:02:07.555 Compiler for C supports arguments -Wformat: YES 00:02:07.555 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:07.555 Compiler for C supports arguments -Wformat-security: NO 00:02:07.555 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:07.555 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:07.555 Compiler for C supports arguments -Wnested-externs: YES 00:02:07.555 Compiler for C supports arguments -Wold-style-definition: YES 00:02:07.555 Compiler for C supports arguments -Wpointer-arith: YES 00:02:07.555 Compiler for C supports arguments -Wsign-compare: YES 00:02:07.555 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:07.555 Compiler for C supports arguments -Wundef: YES 00:02:07.555 Compiler for C supports arguments -Wwrite-strings: YES 00:02:07.555 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:07.555 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:07.555 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:07.555 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:07.555 Program objdump found: YES (/usr/bin/objdump) 00:02:07.555 Compiler for C supports arguments -mavx512f: YES 00:02:07.555 Checking if "AVX512 checking" compiles: YES 00:02:07.555 Fetching value of define "__SSE4_2__" : 1 00:02:07.555 Fetching value of define "__AES__" : 1 00:02:07.555 Fetching value of define "__AVX__" : 1 00:02:07.555 Fetching value of define "__AVX2__" : 1 00:02:07.555 Fetching value of define "__AVX512BW__" : 1 00:02:07.555 Fetching value of define "__AVX512CD__" : 1 00:02:07.555 Fetching value of define "__AVX512DQ__" : 1 00:02:07.555 Fetching value of define "__AVX512F__" : 1 00:02:07.555 Fetching value of define "__AVX512VL__" : 1 00:02:07.555 Fetching value of define 
"__PCLMUL__" : 1 00:02:07.555 Fetching value of define "__RDRND__" : 1 00:02:07.555 Fetching value of define "__RDSEED__" : 1 00:02:07.555 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:07.555 Fetching value of define "__znver1__" : (undefined) 00:02:07.555 Fetching value of define "__znver2__" : (undefined) 00:02:07.555 Fetching value of define "__znver3__" : (undefined) 00:02:07.555 Fetching value of define "__znver4__" : (undefined) 00:02:07.555 Library asan found: YES 00:02:07.555 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:07.555 Message: lib/log: Defining dependency "log" 00:02:07.555 Message: lib/kvargs: Defining dependency "kvargs" 00:02:07.555 Message: lib/telemetry: Defining dependency "telemetry" 00:02:07.555 Library rt found: YES 00:02:07.555 Checking for function "getentropy" : NO 00:02:07.555 Message: lib/eal: Defining dependency "eal" 00:02:07.555 Message: lib/ring: Defining dependency "ring" 00:02:07.555 Message: lib/rcu: Defining dependency "rcu" 00:02:07.555 Message: lib/mempool: Defining dependency "mempool" 00:02:07.555 Message: lib/mbuf: Defining dependency "mbuf" 00:02:07.555 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:07.555 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:07.555 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:07.555 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:07.555 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:07.555 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:07.555 Compiler for C supports arguments -mpclmul: YES 00:02:07.555 Compiler for C supports arguments -maes: YES 00:02:07.555 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:07.555 Compiler for C supports arguments -mavx512bw: YES 00:02:07.555 Compiler for C supports arguments -mavx512dq: YES 00:02:07.555 Compiler for C supports arguments -mavx512vl: YES 00:02:07.555 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:07.555 Compiler for C supports arguments -mavx2: YES 00:02:07.555 Compiler for C supports arguments -mavx: YES 00:02:07.555 Message: lib/net: Defining dependency "net" 00:02:07.555 Message: lib/meter: Defining dependency "meter" 00:02:07.555 Message: lib/ethdev: Defining dependency "ethdev" 00:02:07.555 Message: lib/pci: Defining dependency "pci" 00:02:07.555 Message: lib/cmdline: Defining dependency "cmdline" 00:02:07.555 Message: lib/hash: Defining dependency "hash" 00:02:07.555 Message: lib/timer: Defining dependency "timer" 00:02:07.555 Message: lib/compressdev: Defining dependency "compressdev" 00:02:07.555 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:07.555 Message: lib/dmadev: Defining dependency "dmadev" 00:02:07.555 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:07.555 Message: lib/power: Defining dependency "power" 00:02:07.555 Message: lib/reorder: Defining dependency "reorder" 00:02:07.555 Message: lib/security: Defining dependency "security" 00:02:07.555 Has header "linux/userfaultfd.h" : YES 00:02:07.555 Has header "linux/vduse.h" : YES 00:02:07.555 Message: lib/vhost: Defining dependency "vhost" 00:02:07.555 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:07.555 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:07.555 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:07.555 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:07.555 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:07.555 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:07.555 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:07.555 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:07.555 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:07.555 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:07.555 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:07.555 Configuring doxy-api-html.conf using configuration 00:02:07.555 Configuring doxy-api-man.conf using configuration 00:02:07.555 Program mandb found: YES (/usr/bin/mandb) 00:02:07.555 Program sphinx-build found: NO 00:02:07.555 Configuring rte_build_config.h using configuration 00:02:07.555 Message: 00:02:07.555 ================= 00:02:07.555 Applications Enabled 00:02:07.555 ================= 00:02:07.555 00:02:07.555 apps: 00:02:07.555 00:02:07.555 00:02:07.555 Message: 00:02:07.555 ================= 00:02:07.555 Libraries Enabled 00:02:07.555 ================= 00:02:07.555 00:02:07.555 libs: 00:02:07.555 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:07.555 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:07.555 cryptodev, dmadev, power, reorder, security, vhost, 00:02:07.555 00:02:07.555 Message: 00:02:07.555 =============== 00:02:07.555 Drivers Enabled 00:02:07.555 =============== 00:02:07.555 00:02:07.555 common: 00:02:07.555 00:02:07.555 bus: 00:02:07.555 pci, vdev, 00:02:07.555 mempool: 00:02:07.555 ring, 00:02:07.555 dma: 00:02:07.555 00:02:07.555 net: 00:02:07.555 00:02:07.555 crypto: 00:02:07.555 00:02:07.555 compress: 00:02:07.555 00:02:07.555 vdpa: 00:02:07.555 00:02:07.555 00:02:07.555 Message: 00:02:07.555 ================= 00:02:07.555 Content Skipped 00:02:07.555 ================= 00:02:07.555 00:02:07.555 apps: 00:02:07.555 dumpcap: explicitly disabled via build config 00:02:07.555 graph: explicitly disabled via build config 00:02:07.555 pdump: explicitly disabled via build config 00:02:07.555 proc-info: explicitly disabled via build config 00:02:07.555 test-acl: explicitly disabled via build config 00:02:07.555 test-bbdev: explicitly disabled via build config 00:02:07.555 test-cmdline: explicitly disabled via build config 00:02:07.555 test-compress-perf: explicitly disabled via build config 00:02:07.555 test-crypto-perf: explicitly disabled via build 
config 00:02:07.555 test-dma-perf: explicitly disabled via build config 00:02:07.555 test-eventdev: explicitly disabled via build config 00:02:07.555 test-fib: explicitly disabled via build config 00:02:07.555 test-flow-perf: explicitly disabled via build config 00:02:07.555 test-gpudev: explicitly disabled via build config 00:02:07.555 test-mldev: explicitly disabled via build config 00:02:07.555 test-pipeline: explicitly disabled via build config 00:02:07.555 test-pmd: explicitly disabled via build config 00:02:07.555 test-regex: explicitly disabled via build config 00:02:07.555 test-sad: explicitly disabled via build config 00:02:07.555 test-security-perf: explicitly disabled via build config 00:02:07.555 00:02:07.555 libs: 00:02:07.555 argparse: explicitly disabled via build config 00:02:07.555 metrics: explicitly disabled via build config 00:02:07.555 acl: explicitly disabled via build config 00:02:07.555 bbdev: explicitly disabled via build config 00:02:07.555 bitratestats: explicitly disabled via build config 00:02:07.555 bpf: explicitly disabled via build config 00:02:07.555 cfgfile: explicitly disabled via build config 00:02:07.555 distributor: explicitly disabled via build config 00:02:07.555 efd: explicitly disabled via build config 00:02:07.555 eventdev: explicitly disabled via build config 00:02:07.555 dispatcher: explicitly disabled via build config 00:02:07.555 gpudev: explicitly disabled via build config 00:02:07.555 gro: explicitly disabled via build config 00:02:07.555 gso: explicitly disabled via build config 00:02:07.555 ip_frag: explicitly disabled via build config 00:02:07.555 jobstats: explicitly disabled via build config 00:02:07.555 latencystats: explicitly disabled via build config 00:02:07.555 lpm: explicitly disabled via build config 00:02:07.555 member: explicitly disabled via build config 00:02:07.555 pcapng: explicitly disabled via build config 00:02:07.555 rawdev: explicitly disabled via build config 00:02:07.555 regexdev: explicitly 
disabled via build config 00:02:07.555 mldev: explicitly disabled via build config 00:02:07.555 rib: explicitly disabled via build config 00:02:07.555 sched: explicitly disabled via build config 00:02:07.555 stack: explicitly disabled via build config 00:02:07.555 ipsec: explicitly disabled via build config 00:02:07.555 pdcp: explicitly disabled via build config 00:02:07.555 fib: explicitly disabled via build config 00:02:07.555 port: explicitly disabled via build config 00:02:07.555 pdump: explicitly disabled via build config 00:02:07.555 table: explicitly disabled via build config 00:02:07.555 pipeline: explicitly disabled via build config 00:02:07.555 graph: explicitly disabled via build config 00:02:07.555 node: explicitly disabled via build config 00:02:07.555 00:02:07.555 drivers: 00:02:07.555 common/cpt: not in enabled drivers build config 00:02:07.555 common/dpaax: not in enabled drivers build config 00:02:07.555 common/iavf: not in enabled drivers build config 00:02:07.555 common/idpf: not in enabled drivers build config 00:02:07.555 common/ionic: not in enabled drivers build config 00:02:07.555 common/mvep: not in enabled drivers build config 00:02:07.555 common/octeontx: not in enabled drivers build config 00:02:07.555 bus/auxiliary: not in enabled drivers build config 00:02:07.555 bus/cdx: not in enabled drivers build config 00:02:07.555 bus/dpaa: not in enabled drivers build config 00:02:07.555 bus/fslmc: not in enabled drivers build config 00:02:07.555 bus/ifpga: not in enabled drivers build config 00:02:07.555 bus/platform: not in enabled drivers build config 00:02:07.555 bus/uacce: not in enabled drivers build config 00:02:07.555 bus/vmbus: not in enabled drivers build config 00:02:07.555 common/cnxk: not in enabled drivers build config 00:02:07.555 common/mlx5: not in enabled drivers build config 00:02:07.555 common/nfp: not in enabled drivers build config 00:02:07.555 common/nitrox: not in enabled drivers build config 00:02:07.555 common/qat: not 
in enabled drivers build config 00:02:07.555 common/sfc_efx: not in enabled drivers build config 00:02:07.555 mempool/bucket: not in enabled drivers build config 00:02:07.555 mempool/cnxk: not in enabled drivers build config 00:02:07.555 mempool/dpaa: not in enabled drivers build config 00:02:07.555 mempool/dpaa2: not in enabled drivers build config 00:02:07.555 mempool/octeontx: not in enabled drivers build config 00:02:07.555 mempool/stack: not in enabled drivers build config 00:02:07.555 dma/cnxk: not in enabled drivers build config 00:02:07.555 dma/dpaa: not in enabled drivers build config 00:02:07.555 dma/dpaa2: not in enabled drivers build config 00:02:07.556 dma/hisilicon: not in enabled drivers build config 00:02:07.556 dma/idxd: not in enabled drivers build config 00:02:07.556 dma/ioat: not in enabled drivers build config 00:02:07.556 dma/skeleton: not in enabled drivers build config 00:02:07.556 net/af_packet: not in enabled drivers build config 00:02:07.556 net/af_xdp: not in enabled drivers build config 00:02:07.556 net/ark: not in enabled drivers build config 00:02:07.556 net/atlantic: not in enabled drivers build config 00:02:07.556 net/avp: not in enabled drivers build config 00:02:07.556 net/axgbe: not in enabled drivers build config 00:02:07.556 net/bnx2x: not in enabled drivers build config 00:02:07.556 net/bnxt: not in enabled drivers build config 00:02:07.556 net/bonding: not in enabled drivers build config 00:02:07.556 net/cnxk: not in enabled drivers build config 00:02:07.556 net/cpfl: not in enabled drivers build config 00:02:07.556 net/cxgbe: not in enabled drivers build config 00:02:07.556 net/dpaa: not in enabled drivers build config 00:02:07.556 net/dpaa2: not in enabled drivers build config 00:02:07.556 net/e1000: not in enabled drivers build config 00:02:07.556 net/ena: not in enabled drivers build config 00:02:07.556 net/enetc: not in enabled drivers build config 00:02:07.556 net/enetfec: not in enabled drivers build config 
00:02:07.556 net/enic: not in enabled drivers build config 00:02:07.556 net/failsafe: not in enabled drivers build config 00:02:07.556 net/fm10k: not in enabled drivers build config 00:02:07.556 net/gve: not in enabled drivers build config 00:02:07.556 net/hinic: not in enabled drivers build config 00:02:07.556 net/hns3: not in enabled drivers build config 00:02:07.556 net/i40e: not in enabled drivers build config 00:02:07.556 net/iavf: not in enabled drivers build config 00:02:07.556 net/ice: not in enabled drivers build config 00:02:07.556 net/idpf: not in enabled drivers build config 00:02:07.556 net/igc: not in enabled drivers build config 00:02:07.556 net/ionic: not in enabled drivers build config 00:02:07.556 net/ipn3ke: not in enabled drivers build config 00:02:07.556 net/ixgbe: not in enabled drivers build config 00:02:07.556 net/mana: not in enabled drivers build config 00:02:07.556 net/memif: not in enabled drivers build config 00:02:07.556 net/mlx4: not in enabled drivers build config 00:02:07.556 net/mlx5: not in enabled drivers build config 00:02:07.556 net/mvneta: not in enabled drivers build config 00:02:07.556 net/mvpp2: not in enabled drivers build config 00:02:07.556 net/netvsc: not in enabled drivers build config 00:02:07.556 net/nfb: not in enabled drivers build config 00:02:07.556 net/nfp: not in enabled drivers build config 00:02:07.556 net/ngbe: not in enabled drivers build config 00:02:07.556 net/null: not in enabled drivers build config 00:02:07.556 net/octeontx: not in enabled drivers build config 00:02:07.556 net/octeon_ep: not in enabled drivers build config 00:02:07.556 net/pcap: not in enabled drivers build config 00:02:07.556 net/pfe: not in enabled drivers build config 00:02:07.556 net/qede: not in enabled drivers build config 00:02:07.556 net/ring: not in enabled drivers build config 00:02:07.556 net/sfc: not in enabled drivers build config 00:02:07.556 net/softnic: not in enabled drivers build config 00:02:07.556 net/tap: not in 
enabled drivers build config 00:02:07.556 net/thunderx: not in enabled drivers build config 00:02:07.556 net/txgbe: not in enabled drivers build config 00:02:07.556 net/vdev_netvsc: not in enabled drivers build config 00:02:07.556 net/vhost: not in enabled drivers build config 00:02:07.556 net/virtio: not in enabled drivers build config 00:02:07.556 net/vmxnet3: not in enabled drivers build config 00:02:07.556 raw/*: missing internal dependency, "rawdev" 00:02:07.556 crypto/armv8: not in enabled drivers build config 00:02:07.556 crypto/bcmfs: not in enabled drivers build config 00:02:07.556 crypto/caam_jr: not in enabled drivers build config 00:02:07.556 crypto/ccp: not in enabled drivers build config 00:02:07.556 crypto/cnxk: not in enabled drivers build config 00:02:07.556 crypto/dpaa_sec: not in enabled drivers build config 00:02:07.556 crypto/dpaa2_sec: not in enabled drivers build config 00:02:07.556 crypto/ipsec_mb: not in enabled drivers build config 00:02:07.556 crypto/mlx5: not in enabled drivers build config 00:02:07.556 crypto/mvsam: not in enabled drivers build config 00:02:07.556 crypto/nitrox: not in enabled drivers build config 00:02:07.556 crypto/null: not in enabled drivers build config 00:02:07.556 crypto/octeontx: not in enabled drivers build config 00:02:07.556 crypto/openssl: not in enabled drivers build config 00:02:07.556 crypto/scheduler: not in enabled drivers build config 00:02:07.556 crypto/uadk: not in enabled drivers build config 00:02:07.556 crypto/virtio: not in enabled drivers build config 00:02:07.556 compress/isal: not in enabled drivers build config 00:02:07.556 compress/mlx5: not in enabled drivers build config 00:02:07.556 compress/nitrox: not in enabled drivers build config 00:02:07.556 compress/octeontx: not in enabled drivers build config 00:02:07.556 compress/zlib: not in enabled drivers build config 00:02:07.556 regex/*: missing internal dependency, "regexdev" 00:02:07.556 ml/*: missing internal dependency, "mldev" 
00:02:07.556 vdpa/ifc: not in enabled drivers build config 00:02:07.556 vdpa/mlx5: not in enabled drivers build config 00:02:07.556 vdpa/nfp: not in enabled drivers build config 00:02:07.556 vdpa/sfc: not in enabled drivers build config 00:02:07.556 event/*: missing internal dependency, "eventdev" 00:02:07.556 baseband/*: missing internal dependency, "bbdev" 00:02:07.556 gpu/*: missing internal dependency, "gpudev" 00:02:07.556 00:02:07.556 00:02:07.556 Build targets in project: 85 00:02:07.556 00:02:07.556 DPDK 24.03.0 00:02:07.556 00:02:07.556 User defined options 00:02:07.556 buildtype : debug 00:02:07.556 default_library : shared 00:02:07.556 libdir : lib 00:02:07.556 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:07.556 b_sanitize : address 00:02:07.556 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:07.556 c_link_args : 00:02:07.556 cpu_instruction_set: native 00:02:07.556 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:07.556 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:07.556 enable_docs : false 00:02:07.556 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:07.556 enable_kmods : false 00:02:07.556 max_lcores : 128 00:02:07.556 tests : false 00:02:07.556 00:02:07.556 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:07.815 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:07.815 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:02:07.815 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:07.815 [3/268] Linking static target lib/librte_kvargs.a 00:02:07.815 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:07.815 [5/268] Linking static target lib/librte_log.a 00:02:07.815 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:08.384 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:08.384 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:08.384 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:08.384 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:08.384 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:08.384 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:08.384 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.384 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:08.384 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:08.645 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:08.645 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:08.645 [18/268] Linking static target lib/librte_telemetry.a 00:02:08.905 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.905 [20/268] Linking target lib/librte_log.so.24.1 00:02:08.905 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:08.905 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:08.905 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:08.905 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:09.165 [25/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:09.165 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:09.165 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:09.165 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:09.165 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:09.165 [30/268] Linking target lib/librte_kvargs.so.24.1 00:02:09.165 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:09.165 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:09.424 [33/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:09.424 [34/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.424 [35/268] Linking target lib/librte_telemetry.so.24.1 00:02:09.685 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:09.685 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:09.685 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:09.685 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:09.685 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:09.685 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:09.685 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:09.685 [43/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:09.685 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:09.685 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 
00:02:09.945 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:09.945 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:09.945 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:10.205 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:10.205 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:10.205 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:10.205 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:10.205 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:10.464 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:10.464 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:10.464 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:10.464 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:10.464 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:10.464 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:10.724 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:10.724 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:11.010 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:11.010 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:11.010 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:11.010 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:11.010 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:11.270 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:11.270 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:11.270 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:11.270 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:11.270 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:11.270 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:11.270 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:11.530 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:11.530 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:11.530 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:11.530 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:11.530 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:11.790 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:11.790 [80/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:11.790 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:11.790 [82/268] Linking static target lib/librte_ring.a 00:02:11.790 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:12.050 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:12.050 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:12.050 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:12.050 [87/268] Linking static target lib/librte_eal.a 00:02:12.311 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:12.311 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:12.311 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:12.311 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 
00:02:12.311 [92/268] Linking static target lib/librte_rcu.a 00:02:12.311 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.571 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:12.571 [95/268] Linking static target lib/librte_mempool.a 00:02:12.571 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:12.571 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:12.571 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:12.832 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:12.832 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:12.832 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.832 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:12.832 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:13.092 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:13.092 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:13.092 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:13.092 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:13.092 [108/268] Linking static target lib/librte_meter.a 00:02:13.092 [109/268] Linking static target lib/librte_net.a 00:02:13.352 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:13.352 [111/268] Linking static target lib/librte_mbuf.a 00:02:13.352 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:13.352 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:13.352 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:13.612 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.613 
[116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:13.613 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.613 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.873 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:14.134 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:14.134 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:14.134 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:14.395 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.395 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:14.395 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:14.395 [126/268] Linking static target lib/librte_pci.a 00:02:14.655 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:14.655 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:14.655 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:14.655 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:14.915 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:14.915 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:14.915 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.915 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:14.915 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:14.915 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:14.915 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 
00:02:14.915 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:15.175 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:15.175 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:15.175 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:15.175 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:15.175 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:15.175 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:15.175 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:15.435 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:15.435 [147/268] Linking static target lib/librte_cmdline.a 00:02:15.435 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:15.694 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:15.695 [150/268] Linking static target lib/librte_timer.a 00:02:15.695 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:15.695 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:15.954 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:15.954 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:15.954 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:16.214 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:16.214 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.214 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:16.214 [159/268] Linking static target lib/librte_ethdev.a 00:02:16.474 [160/268] Compiling C 
object lib/librte_power.a.p/power_guest_channel.c.o 00:02:16.474 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:16.474 [162/268] Linking static target lib/librte_compressdev.a 00:02:16.734 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:16.734 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:16.734 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:16.734 [166/268] Linking static target lib/librte_dmadev.a 00:02:16.734 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:16.734 [168/268] Linking static target lib/librte_hash.a 00:02:16.734 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:16.995 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:16.995 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:17.255 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:17.255 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.512 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:17.512 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:17.512 [176/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.512 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:17.513 [178/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.771 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:17.771 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:17.771 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:17.771 [182/268] Linking static 
target lib/librte_cryptodev.a 00:02:17.771 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:18.029 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:18.029 [185/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.029 [186/268] Linking static target lib/librte_power.a 00:02:18.288 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:18.288 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:18.288 [189/268] Linking static target lib/librte_reorder.a 00:02:18.547 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:18.547 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:18.547 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:18.547 [193/268] Linking static target lib/librte_security.a 00:02:18.806 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.066 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:19.326 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.326 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.326 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:19.586 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:19.586 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:19.853 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:19.853 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:20.127 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:20.127 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 
00:02:20.127 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:20.388 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.388 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:20.388 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:20.388 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:20.388 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:20.388 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:20.648 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:20.648 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:20.648 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.648 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.648 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:20.648 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.648 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.648 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:20.648 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:20.907 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:20.907 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.907 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:21.168 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:21.168 [225/268] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:21.168 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:21.168 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.547 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:23.483 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.483 [230/268] Linking target lib/librte_eal.so.24.1 00:02:23.742 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:23.742 [232/268] Linking target lib/librte_ring.so.24.1 00:02:23.742 [233/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:23.742 [234/268] Linking target lib/librte_meter.so.24.1 00:02:23.742 [235/268] Linking target lib/librte_timer.so.24.1 00:02:23.742 [236/268] Linking target lib/librte_pci.so.24.1 00:02:23.742 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:24.002 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:24.002 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:24.002 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:24.002 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:24.002 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:24.002 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:24.002 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:24.002 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:24.002 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:24.002 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:24.261 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:24.261 [249/268] 
Linking target drivers/librte_mempool_ring.so.24.1 00:02:24.261 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:24.261 [251/268] Linking target lib/librte_net.so.24.1 00:02:24.261 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:24.261 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:24.261 [254/268] Linking target lib/librte_reorder.so.24.1 00:02:24.520 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:24.520 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:24.520 [257/268] Linking target lib/librte_hash.so.24.1 00:02:24.520 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:24.520 [259/268] Linking target lib/librte_security.so.24.1 00:02:24.779 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:25.349 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.610 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:25.610 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:25.610 [264/268] Linking target lib/librte_power.so.24.1 00:02:26.990 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:26.990 [266/268] Linking static target lib/librte_vhost.a 00:02:29.567 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.567 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:29.567 INFO: autodetecting backend as ninja 00:02:29.567 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:47.695 CC lib/ut_mock/mock.o 00:02:47.695 CC lib/ut/ut.o 00:02:47.695 CC lib/log/log.o 00:02:47.695 CC lib/log/log_deprecated.o 00:02:47.695 CC lib/log/log_flags.o 00:02:47.695 LIB libspdk_ut_mock.a 00:02:47.695 LIB libspdk_ut.a 
00:02:47.695 LIB libspdk_log.a 00:02:47.695 SO libspdk_ut.so.2.0 00:02:47.695 SO libspdk_ut_mock.so.6.0 00:02:47.695 SO libspdk_log.so.7.1 00:02:47.695 SYMLINK libspdk_ut_mock.so 00:02:47.695 SYMLINK libspdk_ut.so 00:02:47.695 SYMLINK libspdk_log.so 00:02:47.695 CC lib/dma/dma.o 00:02:47.695 CC lib/util/bit_array.o 00:02:47.695 CC lib/util/base64.o 00:02:47.695 CC lib/util/cpuset.o 00:02:47.695 CC lib/util/crc16.o 00:02:47.695 CC lib/util/crc32.o 00:02:47.695 CXX lib/trace_parser/trace.o 00:02:47.695 CC lib/util/crc32c.o 00:02:47.695 CC lib/ioat/ioat.o 00:02:47.695 CC lib/vfio_user/host/vfio_user_pci.o 00:02:47.695 CC lib/util/crc32_ieee.o 00:02:47.695 CC lib/util/crc64.o 00:02:47.695 CC lib/vfio_user/host/vfio_user.o 00:02:47.695 CC lib/util/dif.o 00:02:47.695 LIB libspdk_dma.a 00:02:47.695 CC lib/util/fd.o 00:02:47.696 SO libspdk_dma.so.5.0 00:02:47.696 CC lib/util/fd_group.o 00:02:47.696 CC lib/util/file.o 00:02:47.696 CC lib/util/hexlify.o 00:02:47.696 SYMLINK libspdk_dma.so 00:02:47.696 CC lib/util/iov.o 00:02:47.696 LIB libspdk_ioat.a 00:02:47.955 SO libspdk_ioat.so.7.0 00:02:47.955 CC lib/util/math.o 00:02:47.955 CC lib/util/net.o 00:02:47.955 SYMLINK libspdk_ioat.so 00:02:47.955 LIB libspdk_vfio_user.a 00:02:47.955 CC lib/util/pipe.o 00:02:47.955 CC lib/util/strerror_tls.o 00:02:47.955 CC lib/util/string.o 00:02:47.955 SO libspdk_vfio_user.so.5.0 00:02:47.955 CC lib/util/uuid.o 00:02:47.955 CC lib/util/xor.o 00:02:47.955 SYMLINK libspdk_vfio_user.so 00:02:47.955 CC lib/util/zipf.o 00:02:47.955 CC lib/util/md5.o 00:02:48.539 LIB libspdk_util.a 00:02:48.539 LIB libspdk_trace_parser.a 00:02:48.539 SO libspdk_util.so.10.1 00:02:48.539 SO libspdk_trace_parser.so.6.0 00:02:48.804 SYMLINK libspdk_util.so 00:02:48.804 SYMLINK libspdk_trace_parser.so 00:02:48.804 CC lib/idxd/idxd.o 00:02:48.804 CC lib/conf/conf.o 00:02:48.804 CC lib/idxd/idxd_user.o 00:02:48.804 CC lib/idxd/idxd_kernel.o 00:02:48.804 CC lib/json/json_parse.o 00:02:48.804 CC lib/json/json_util.o 
00:02:48.804 CC lib/json/json_write.o 00:02:48.804 CC lib/vmd/vmd.o 00:02:48.804 CC lib/env_dpdk/env.o 00:02:48.804 CC lib/rdma_utils/rdma_utils.o 00:02:49.064 CC lib/vmd/led.o 00:02:49.064 LIB libspdk_conf.a 00:02:49.064 CC lib/env_dpdk/memory.o 00:02:49.064 SO libspdk_conf.so.6.0 00:02:49.064 CC lib/env_dpdk/pci.o 00:02:49.064 SYMLINK libspdk_conf.so 00:02:49.064 LIB libspdk_json.a 00:02:49.064 CC lib/env_dpdk/init.o 00:02:49.064 CC lib/env_dpdk/threads.o 00:02:49.064 CC lib/env_dpdk/pci_ioat.o 00:02:49.324 SO libspdk_json.so.6.0 00:02:49.324 LIB libspdk_rdma_utils.a 00:02:49.324 SO libspdk_rdma_utils.so.1.0 00:02:49.324 SYMLINK libspdk_json.so 00:02:49.324 SYMLINK libspdk_rdma_utils.so 00:02:49.324 CC lib/env_dpdk/pci_virtio.o 00:02:49.324 CC lib/env_dpdk/pci_vmd.o 00:02:49.324 CC lib/env_dpdk/pci_idxd.o 00:02:49.324 CC lib/jsonrpc/jsonrpc_server.o 00:02:49.324 CC lib/env_dpdk/pci_event.o 00:02:49.583 CC lib/rdma_provider/common.o 00:02:49.583 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:49.583 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:49.583 CC lib/jsonrpc/jsonrpc_client.o 00:02:49.583 LIB libspdk_idxd.a 00:02:49.583 SO libspdk_idxd.so.12.1 00:02:49.583 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:49.583 LIB libspdk_vmd.a 00:02:49.583 CC lib/env_dpdk/sigbus_handler.o 00:02:49.583 SYMLINK libspdk_idxd.so 00:02:49.583 CC lib/env_dpdk/pci_dpdk.o 00:02:49.583 SO libspdk_vmd.so.6.0 00:02:49.583 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:49.583 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:49.842 SYMLINK libspdk_vmd.so 00:02:49.842 LIB libspdk_rdma_provider.a 00:02:49.842 SO libspdk_rdma_provider.so.7.0 00:02:49.842 LIB libspdk_jsonrpc.a 00:02:49.842 SYMLINK libspdk_rdma_provider.so 00:02:49.842 SO libspdk_jsonrpc.so.6.0 00:02:50.101 SYMLINK libspdk_jsonrpc.so 00:02:50.361 CC lib/rpc/rpc.o 00:02:50.621 LIB libspdk_rpc.a 00:02:50.621 SO libspdk_rpc.so.6.0 00:02:50.621 LIB libspdk_env_dpdk.a 00:02:50.621 SYMLINK libspdk_rpc.so 00:02:50.880 SO libspdk_env_dpdk.so.15.1 00:02:50.880 
SYMLINK libspdk_env_dpdk.so 00:02:51.139 CC lib/keyring/keyring.o 00:02:51.139 CC lib/trace/trace.o 00:02:51.139 CC lib/trace/trace_rpc.o 00:02:51.139 CC lib/trace/trace_flags.o 00:02:51.139 CC lib/keyring/keyring_rpc.o 00:02:51.139 CC lib/notify/notify.o 00:02:51.139 CC lib/notify/notify_rpc.o 00:02:51.139 LIB libspdk_notify.a 00:02:51.399 SO libspdk_notify.so.6.0 00:02:51.399 LIB libspdk_keyring.a 00:02:51.399 LIB libspdk_trace.a 00:02:51.399 SYMLINK libspdk_notify.so 00:02:51.399 SO libspdk_keyring.so.2.0 00:02:51.399 SO libspdk_trace.so.11.0 00:02:51.399 SYMLINK libspdk_keyring.so 00:02:51.399 SYMLINK libspdk_trace.so 00:02:51.968 CC lib/sock/sock_rpc.o 00:02:51.968 CC lib/sock/sock.o 00:02:51.968 CC lib/thread/thread.o 00:02:51.968 CC lib/thread/iobuf.o 00:02:52.227 LIB libspdk_sock.a 00:02:52.486 SO libspdk_sock.so.10.0 00:02:52.486 SYMLINK libspdk_sock.so 00:02:52.744 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:52.744 CC lib/nvme/nvme_ctrlr.o 00:02:52.744 CC lib/nvme/nvme_fabric.o 00:02:52.744 CC lib/nvme/nvme_ns_cmd.o 00:02:52.744 CC lib/nvme/nvme_ns.o 00:02:52.745 CC lib/nvme/nvme_pcie_common.o 00:02:52.745 CC lib/nvme/nvme_pcie.o 00:02:52.745 CC lib/nvme/nvme.o 00:02:52.745 CC lib/nvme/nvme_qpair.o 00:02:53.682 CC lib/nvme/nvme_quirks.o 00:02:53.682 CC lib/nvme/nvme_transport.o 00:02:53.682 LIB libspdk_thread.a 00:02:53.682 CC lib/nvme/nvme_discovery.o 00:02:53.682 SO libspdk_thread.so.11.0 00:02:53.682 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:53.682 SYMLINK libspdk_thread.so 00:02:53.682 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:53.682 CC lib/nvme/nvme_tcp.o 00:02:53.941 CC lib/nvme/nvme_opal.o 00:02:53.941 CC lib/nvme/nvme_io_msg.o 00:02:53.941 CC lib/nvme/nvme_poll_group.o 00:02:54.200 CC lib/nvme/nvme_zns.o 00:02:54.200 CC lib/accel/accel.o 00:02:54.200 CC lib/blob/blobstore.o 00:02:54.459 CC lib/blob/request.o 00:02:54.459 CC lib/blob/zeroes.o 00:02:54.459 CC lib/accel/accel_rpc.o 00:02:54.459 CC lib/init/json_config.o 00:02:54.718 CC lib/init/subsystem.o 
00:02:54.718 CC lib/accel/accel_sw.o 00:02:54.718 CC lib/blob/blob_bs_dev.o 00:02:54.718 CC lib/nvme/nvme_stubs.o 00:02:54.718 CC lib/nvme/nvme_auth.o 00:02:54.718 CC lib/init/subsystem_rpc.o 00:02:54.718 CC lib/virtio/virtio.o 00:02:54.978 CC lib/init/rpc.o 00:02:54.978 CC lib/virtio/virtio_vhost_user.o 00:02:54.978 CC lib/virtio/virtio_vfio_user.o 00:02:55.237 LIB libspdk_init.a 00:02:55.237 CC lib/virtio/virtio_pci.o 00:02:55.237 SO libspdk_init.so.6.0 00:02:55.237 CC lib/nvme/nvme_cuse.o 00:02:55.237 SYMLINK libspdk_init.so 00:02:55.237 CC lib/nvme/nvme_rdma.o 00:02:55.496 CC lib/fsdev/fsdev.o 00:02:55.496 CC lib/fsdev/fsdev_io.o 00:02:55.496 CC lib/event/app.o 00:02:55.496 LIB libspdk_virtio.a 00:02:55.496 SO libspdk_virtio.so.7.0 00:02:55.496 CC lib/event/reactor.o 00:02:55.496 SYMLINK libspdk_virtio.so 00:02:55.496 CC lib/event/log_rpc.o 00:02:55.496 LIB libspdk_accel.a 00:02:55.755 SO libspdk_accel.so.16.0 00:02:55.755 CC lib/event/app_rpc.o 00:02:55.755 SYMLINK libspdk_accel.so 00:02:55.755 CC lib/event/scheduler_static.o 00:02:55.755 CC lib/fsdev/fsdev_rpc.o 00:02:56.016 CC lib/bdev/bdev.o 00:02:56.016 CC lib/bdev/bdev_zone.o 00:02:56.016 CC lib/bdev/bdev_rpc.o 00:02:56.016 CC lib/bdev/scsi_nvme.o 00:02:56.016 CC lib/bdev/part.o 00:02:56.016 LIB libspdk_event.a 00:02:56.016 LIB libspdk_fsdev.a 00:02:56.016 SO libspdk_event.so.14.0 00:02:56.016 SO libspdk_fsdev.so.2.0 00:02:56.278 SYMLINK libspdk_event.so 00:02:56.278 SYMLINK libspdk_fsdev.so 00:02:56.542 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:56.802 LIB libspdk_nvme.a 00:02:57.062 SO libspdk_nvme.so.15.0 00:02:57.322 LIB libspdk_fuse_dispatcher.a 00:02:57.322 SO libspdk_fuse_dispatcher.so.1.0 00:02:57.322 SYMLINK libspdk_nvme.so 00:02:57.322 SYMLINK libspdk_fuse_dispatcher.so 00:02:57.892 LIB libspdk_blob.a 00:02:58.153 SO libspdk_blob.so.12.0 00:02:58.153 SYMLINK libspdk_blob.so 00:02:58.722 CC lib/lvol/lvol.o 00:02:58.722 CC lib/blobfs/tree.o 00:02:58.722 CC lib/blobfs/blobfs.o 00:02:58.983 LIB 
libspdk_bdev.a 00:02:58.983 SO libspdk_bdev.so.17.0 00:02:59.243 SYMLINK libspdk_bdev.so 00:02:59.243 CC lib/ublk/ublk.o 00:02:59.503 CC lib/ublk/ublk_rpc.o 00:02:59.503 CC lib/scsi/port.o 00:02:59.503 CC lib/scsi/lun.o 00:02:59.503 CC lib/nbd/nbd.o 00:02:59.503 CC lib/scsi/dev.o 00:02:59.503 CC lib/nvmf/ctrlr.o 00:02:59.503 CC lib/ftl/ftl_core.o 00:02:59.503 CC lib/ftl/ftl_init.o 00:02:59.503 LIB libspdk_blobfs.a 00:02:59.503 CC lib/ftl/ftl_layout.o 00:02:59.503 CC lib/ftl/ftl_debug.o 00:02:59.503 SO libspdk_blobfs.so.11.0 00:02:59.763 SYMLINK libspdk_blobfs.so 00:02:59.763 CC lib/scsi/scsi.o 00:02:59.763 CC lib/scsi/scsi_bdev.o 00:02:59.763 CC lib/scsi/scsi_pr.o 00:02:59.763 LIB libspdk_lvol.a 00:02:59.763 SO libspdk_lvol.so.11.0 00:02:59.763 CC lib/scsi/scsi_rpc.o 00:02:59.763 CC lib/nbd/nbd_rpc.o 00:02:59.763 CC lib/scsi/task.o 00:02:59.763 SYMLINK libspdk_lvol.so 00:02:59.763 CC lib/nvmf/ctrlr_discovery.o 00:02:59.763 CC lib/nvmf/ctrlr_bdev.o 00:03:00.024 CC lib/ftl/ftl_io.o 00:03:00.024 CC lib/ftl/ftl_sb.o 00:03:00.024 LIB libspdk_nbd.a 00:03:00.024 SO libspdk_nbd.so.7.0 00:03:00.024 CC lib/ftl/ftl_l2p.o 00:03:00.024 LIB libspdk_ublk.a 00:03:00.024 SYMLINK libspdk_nbd.so 00:03:00.024 CC lib/ftl/ftl_l2p_flat.o 00:03:00.024 CC lib/ftl/ftl_nv_cache.o 00:03:00.024 SO libspdk_ublk.so.3.0 00:03:00.024 CC lib/nvmf/subsystem.o 00:03:00.284 CC lib/nvmf/nvmf.o 00:03:00.284 SYMLINK libspdk_ublk.so 00:03:00.284 CC lib/nvmf/nvmf_rpc.o 00:03:00.284 CC lib/ftl/ftl_band.o 00:03:00.284 CC lib/ftl/ftl_band_ops.o 00:03:00.284 LIB libspdk_scsi.a 00:03:00.284 SO libspdk_scsi.so.9.0 00:03:00.543 CC lib/nvmf/transport.o 00:03:00.543 SYMLINK libspdk_scsi.so 00:03:00.543 CC lib/nvmf/tcp.o 00:03:00.806 CC lib/nvmf/stubs.o 00:03:00.806 CC lib/iscsi/conn.o 00:03:00.806 CC lib/vhost/vhost.o 00:03:01.065 CC lib/nvmf/mdns_server.o 00:03:01.065 CC lib/iscsi/init_grp.o 00:03:01.065 CC lib/iscsi/iscsi.o 00:03:01.065 CC lib/iscsi/param.o 00:03:01.065 CC lib/ftl/ftl_writer.o 00:03:01.325 CC 
lib/nvmf/rdma.o 00:03:01.325 CC lib/iscsi/portal_grp.o 00:03:01.326 CC lib/ftl/ftl_rq.o 00:03:01.326 CC lib/ftl/ftl_reloc.o 00:03:01.597 CC lib/nvmf/auth.o 00:03:01.597 CC lib/iscsi/tgt_node.o 00:03:01.597 CC lib/iscsi/iscsi_subsystem.o 00:03:01.597 CC lib/iscsi/iscsi_rpc.o 00:03:01.597 CC lib/vhost/vhost_rpc.o 00:03:01.856 CC lib/iscsi/task.o 00:03:01.856 CC lib/ftl/ftl_l2p_cache.o 00:03:01.856 CC lib/vhost/vhost_scsi.o 00:03:02.117 CC lib/vhost/vhost_blk.o 00:03:02.117 CC lib/ftl/ftl_p2l.o 00:03:02.117 CC lib/ftl/ftl_p2l_log.o 00:03:02.117 CC lib/ftl/mngt/ftl_mngt.o 00:03:02.117 CC lib/vhost/rte_vhost_user.o 00:03:02.378 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:02.378 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:02.378 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:02.378 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:02.378 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:02.378 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:02.638 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:02.638 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:02.638 LIB libspdk_iscsi.a 00:03:02.638 SO libspdk_iscsi.so.8.0 00:03:02.638 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:02.638 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:02.638 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:02.638 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:02.638 CC lib/ftl/utils/ftl_conf.o 00:03:02.897 SYMLINK libspdk_iscsi.so 00:03:02.897 CC lib/ftl/utils/ftl_md.o 00:03:02.897 CC lib/ftl/utils/ftl_mempool.o 00:03:02.897 CC lib/ftl/utils/ftl_bitmap.o 00:03:02.897 CC lib/ftl/utils/ftl_property.o 00:03:02.897 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:02.897 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:02.897 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:02.897 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:03.157 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:03.157 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:03.157 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:03.157 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:03.157 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:03.157 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:03.157 
LIB libspdk_vhost.a 00:03:03.157 SO libspdk_vhost.so.8.0 00:03:03.157 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:03.157 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:03.157 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:03.157 CC lib/ftl/base/ftl_base_dev.o 00:03:03.416 CC lib/ftl/base/ftl_base_bdev.o 00:03:03.416 CC lib/ftl/ftl_trace.o 00:03:03.416 SYMLINK libspdk_vhost.so 00:03:03.416 LIB libspdk_ftl.a 00:03:03.676 LIB libspdk_nvmf.a 00:03:03.676 SO libspdk_nvmf.so.20.0 00:03:03.676 SO libspdk_ftl.so.9.0 00:03:03.936 SYMLINK libspdk_nvmf.so 00:03:03.936 SYMLINK libspdk_ftl.so 00:03:04.506 CC module/env_dpdk/env_dpdk_rpc.o 00:03:04.506 CC module/accel/dsa/accel_dsa.o 00:03:04.506 CC module/accel/error/accel_error.o 00:03:04.506 CC module/fsdev/aio/fsdev_aio.o 00:03:04.506 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:04.506 CC module/keyring/linux/keyring.o 00:03:04.506 CC module/blob/bdev/blob_bdev.o 00:03:04.506 CC module/keyring/file/keyring.o 00:03:04.506 CC module/accel/ioat/accel_ioat.o 00:03:04.506 CC module/sock/posix/posix.o 00:03:04.506 LIB libspdk_env_dpdk_rpc.a 00:03:04.506 SO libspdk_env_dpdk_rpc.so.6.0 00:03:04.766 SYMLINK libspdk_env_dpdk_rpc.so 00:03:04.766 CC module/keyring/linux/keyring_rpc.o 00:03:04.766 CC module/accel/ioat/accel_ioat_rpc.o 00:03:04.766 CC module/keyring/file/keyring_rpc.o 00:03:04.766 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:04.766 CC module/accel/error/accel_error_rpc.o 00:03:04.766 LIB libspdk_scheduler_dynamic.a 00:03:04.766 SO libspdk_scheduler_dynamic.so.4.0 00:03:04.766 LIB libspdk_keyring_linux.a 00:03:04.766 LIB libspdk_accel_ioat.a 00:03:04.766 LIB libspdk_blob_bdev.a 00:03:04.766 LIB libspdk_keyring_file.a 00:03:04.766 CC module/accel/dsa/accel_dsa_rpc.o 00:03:04.766 SO libspdk_keyring_linux.so.1.0 00:03:04.766 SO libspdk_accel_ioat.so.6.0 00:03:04.766 SYMLINK libspdk_scheduler_dynamic.so 00:03:04.766 SO libspdk_blob_bdev.so.12.0 00:03:04.766 SO libspdk_keyring_file.so.2.0 00:03:04.766 LIB libspdk_accel_error.a 
00:03:04.766 SYMLINK libspdk_keyring_linux.so 00:03:05.026 SYMLINK libspdk_accel_ioat.so 00:03:05.026 SYMLINK libspdk_blob_bdev.so 00:03:05.026 SO libspdk_accel_error.so.2.0 00:03:05.026 SYMLINK libspdk_keyring_file.so 00:03:05.026 CC module/fsdev/aio/linux_aio_mgr.o 00:03:05.026 SYMLINK libspdk_accel_error.so 00:03:05.026 LIB libspdk_accel_dsa.a 00:03:05.026 SO libspdk_accel_dsa.so.5.0 00:03:05.026 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:05.026 CC module/scheduler/gscheduler/gscheduler.o 00:03:05.026 SYMLINK libspdk_accel_dsa.so 00:03:05.026 CC module/accel/iaa/accel_iaa.o 00:03:05.026 CC module/accel/iaa/accel_iaa_rpc.o 00:03:05.286 CC module/bdev/delay/vbdev_delay.o 00:03:05.286 LIB libspdk_scheduler_dpdk_governor.a 00:03:05.286 LIB libspdk_scheduler_gscheduler.a 00:03:05.286 CC module/bdev/error/vbdev_error.o 00:03:05.286 CC module/blobfs/bdev/blobfs_bdev.o 00:03:05.286 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:05.286 SO libspdk_scheduler_gscheduler.so.4.0 00:03:05.286 CC module/bdev/gpt/gpt.o 00:03:05.286 CC module/bdev/gpt/vbdev_gpt.o 00:03:05.286 SYMLINK libspdk_scheduler_gscheduler.so 00:03:05.286 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:05.286 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:05.286 CC module/bdev/error/vbdev_error_rpc.o 00:03:05.286 LIB libspdk_fsdev_aio.a 00:03:05.286 LIB libspdk_accel_iaa.a 00:03:05.286 SO libspdk_accel_iaa.so.3.0 00:03:05.286 SO libspdk_fsdev_aio.so.1.0 00:03:05.286 LIB libspdk_sock_posix.a 00:03:05.286 SO libspdk_sock_posix.so.6.0 00:03:05.547 SYMLINK libspdk_accel_iaa.so 00:03:05.547 SYMLINK libspdk_fsdev_aio.so 00:03:05.547 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:05.547 LIB libspdk_blobfs_bdev.a 00:03:05.547 SYMLINK libspdk_sock_posix.so 00:03:05.547 SO libspdk_blobfs_bdev.so.6.0 00:03:05.547 LIB libspdk_bdev_error.a 00:03:05.547 SO libspdk_bdev_error.so.6.0 00:03:05.547 LIB libspdk_bdev_gpt.a 00:03:05.547 CC module/bdev/lvol/vbdev_lvol.o 00:03:05.547 SYMLINK libspdk_blobfs_bdev.so 
00:03:05.547 SO libspdk_bdev_gpt.so.6.0 00:03:05.547 CC module/bdev/malloc/bdev_malloc.o 00:03:05.547 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:05.547 SYMLINK libspdk_bdev_error.so 00:03:05.547 CC module/bdev/null/bdev_null.o 00:03:05.547 LIB libspdk_bdev_delay.a 00:03:05.547 CC module/bdev/nvme/bdev_nvme.o 00:03:05.547 CC module/bdev/passthru/vbdev_passthru.o 00:03:05.547 SO libspdk_bdev_delay.so.6.0 00:03:05.547 SYMLINK libspdk_bdev_gpt.so 00:03:05.547 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:05.807 SYMLINK libspdk_bdev_delay.so 00:03:05.807 CC module/bdev/raid/bdev_raid.o 00:03:05.807 CC module/bdev/null/bdev_null_rpc.o 00:03:05.807 CC module/bdev/split/vbdev_split.o 00:03:05.807 CC module/bdev/nvme/nvme_rpc.o 00:03:05.807 CC module/bdev/nvme/bdev_mdns_client.o 00:03:05.807 LIB libspdk_bdev_null.a 00:03:06.066 SO libspdk_bdev_null.so.6.0 00:03:06.066 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:06.066 CC module/bdev/split/vbdev_split_rpc.o 00:03:06.066 SYMLINK libspdk_bdev_null.so 00:03:06.066 CC module/bdev/nvme/vbdev_opal.o 00:03:06.066 LIB libspdk_bdev_malloc.a 00:03:06.066 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:06.066 SO libspdk_bdev_malloc.so.6.0 00:03:06.066 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:06.066 SYMLINK libspdk_bdev_malloc.so 00:03:06.066 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:06.066 LIB libspdk_bdev_passthru.a 00:03:06.066 LIB libspdk_bdev_split.a 00:03:06.326 SO libspdk_bdev_split.so.6.0 00:03:06.326 SO libspdk_bdev_passthru.so.6.0 00:03:06.326 SYMLINK libspdk_bdev_split.so 00:03:06.326 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:06.326 SYMLINK libspdk_bdev_passthru.so 00:03:06.326 CC module/bdev/raid/bdev_raid_rpc.o 00:03:06.326 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:06.326 CC module/bdev/raid/bdev_raid_sb.o 00:03:06.326 CC module/bdev/raid/raid0.o 00:03:06.326 CC module/bdev/aio/bdev_aio.o 00:03:06.586 CC module/bdev/raid/raid1.o 00:03:06.586 CC module/bdev/aio/bdev_aio_rpc.o 00:03:06.586 LIB 
libspdk_bdev_lvol.a 00:03:06.586 CC module/bdev/raid/concat.o 00:03:06.586 SO libspdk_bdev_lvol.so.6.0 00:03:06.586 CC module/bdev/raid/raid5f.o 00:03:06.586 LIB libspdk_bdev_zone_block.a 00:03:06.586 SYMLINK libspdk_bdev_lvol.so 00:03:06.586 SO libspdk_bdev_zone_block.so.6.0 00:03:06.586 CC module/bdev/ftl/bdev_ftl.o 00:03:06.846 SYMLINK libspdk_bdev_zone_block.so 00:03:06.846 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:06.846 LIB libspdk_bdev_aio.a 00:03:06.846 SO libspdk_bdev_aio.so.6.0 00:03:06.846 CC module/bdev/iscsi/bdev_iscsi.o 00:03:06.846 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:06.846 SYMLINK libspdk_bdev_aio.so 00:03:07.130 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:07.130 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:07.130 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:07.130 LIB libspdk_bdev_ftl.a 00:03:07.130 SO libspdk_bdev_ftl.so.6.0 00:03:07.130 SYMLINK libspdk_bdev_ftl.so 00:03:07.130 LIB libspdk_bdev_raid.a 00:03:07.390 LIB libspdk_bdev_iscsi.a 00:03:07.390 SO libspdk_bdev_raid.so.6.0 00:03:07.390 SO libspdk_bdev_iscsi.so.6.0 00:03:07.390 SYMLINK libspdk_bdev_iscsi.so 00:03:07.390 SYMLINK libspdk_bdev_raid.so 00:03:07.649 LIB libspdk_bdev_virtio.a 00:03:07.649 SO libspdk_bdev_virtio.so.6.0 00:03:07.909 SYMLINK libspdk_bdev_virtio.so 00:03:08.846 LIB libspdk_bdev_nvme.a 00:03:09.105 SO libspdk_bdev_nvme.so.7.1 00:03:09.105 SYMLINK libspdk_bdev_nvme.so 00:03:10.042 CC module/event/subsystems/iobuf/iobuf.o 00:03:10.042 CC module/event/subsystems/sock/sock.o 00:03:10.042 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:10.042 CC module/event/subsystems/keyring/keyring.o 00:03:10.042 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:10.042 CC module/event/subsystems/fsdev/fsdev.o 00:03:10.042 CC module/event/subsystems/scheduler/scheduler.o 00:03:10.042 CC module/event/subsystems/vmd/vmd.o 00:03:10.042 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:10.042 LIB libspdk_event_vhost_blk.a 00:03:10.042 LIB libspdk_event_fsdev.a 00:03:10.042 LIB 
libspdk_event_scheduler.a 00:03:10.042 LIB libspdk_event_vmd.a 00:03:10.042 SO libspdk_event_vhost_blk.so.3.0 00:03:10.042 LIB libspdk_event_keyring.a 00:03:10.042 LIB libspdk_event_iobuf.a 00:03:10.042 SO libspdk_event_scheduler.so.4.0 00:03:10.042 LIB libspdk_event_sock.a 00:03:10.042 SO libspdk_event_fsdev.so.1.0 00:03:10.042 SO libspdk_event_vmd.so.6.0 00:03:10.042 SO libspdk_event_keyring.so.1.0 00:03:10.042 SO libspdk_event_iobuf.so.3.0 00:03:10.042 SO libspdk_event_sock.so.5.0 00:03:10.042 SYMLINK libspdk_event_vhost_blk.so 00:03:10.042 SYMLINK libspdk_event_fsdev.so 00:03:10.042 SYMLINK libspdk_event_scheduler.so 00:03:10.042 SYMLINK libspdk_event_vmd.so 00:03:10.042 SYMLINK libspdk_event_keyring.so 00:03:10.042 SYMLINK libspdk_event_sock.so 00:03:10.042 SYMLINK libspdk_event_iobuf.so 00:03:10.610 CC module/event/subsystems/accel/accel.o 00:03:10.610 LIB libspdk_event_accel.a 00:03:10.610 SO libspdk_event_accel.so.6.0 00:03:10.869 SYMLINK libspdk_event_accel.so 00:03:11.127 CC module/event/subsystems/bdev/bdev.o 00:03:11.385 LIB libspdk_event_bdev.a 00:03:11.385 SO libspdk_event_bdev.so.6.0 00:03:11.385 SYMLINK libspdk_event_bdev.so 00:03:11.953 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:11.953 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:11.953 CC module/event/subsystems/nbd/nbd.o 00:03:11.953 CC module/event/subsystems/ublk/ublk.o 00:03:11.953 CC module/event/subsystems/scsi/scsi.o 00:03:11.953 LIB libspdk_event_nbd.a 00:03:11.953 LIB libspdk_event_ublk.a 00:03:11.953 SO libspdk_event_nbd.so.6.0 00:03:11.953 LIB libspdk_event_scsi.a 00:03:11.953 SO libspdk_event_ublk.so.3.0 00:03:11.953 SO libspdk_event_scsi.so.6.0 00:03:11.953 SYMLINK libspdk_event_nbd.so 00:03:12.213 SYMLINK libspdk_event_ublk.so 00:03:12.213 LIB libspdk_event_nvmf.a 00:03:12.213 SYMLINK libspdk_event_scsi.so 00:03:12.213 SO libspdk_event_nvmf.so.6.0 00:03:12.213 SYMLINK libspdk_event_nvmf.so 00:03:12.472 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:12.472 CC 
module/event/subsystems/iscsi/iscsi.o 00:03:12.732 LIB libspdk_event_vhost_scsi.a 00:03:12.732 LIB libspdk_event_iscsi.a 00:03:12.732 SO libspdk_event_vhost_scsi.so.3.0 00:03:12.732 SO libspdk_event_iscsi.so.6.0 00:03:12.732 SYMLINK libspdk_event_iscsi.so 00:03:12.732 SYMLINK libspdk_event_vhost_scsi.so 00:03:12.992 SO libspdk.so.6.0 00:03:12.992 SYMLINK libspdk.so 00:03:13.590 CC test/rpc_client/rpc_client_test.o 00:03:13.590 TEST_HEADER include/spdk/accel.h 00:03:13.590 TEST_HEADER include/spdk/accel_module.h 00:03:13.590 TEST_HEADER include/spdk/assert.h 00:03:13.590 TEST_HEADER include/spdk/barrier.h 00:03:13.590 CXX app/trace/trace.o 00:03:13.590 TEST_HEADER include/spdk/base64.h 00:03:13.590 TEST_HEADER include/spdk/bdev.h 00:03:13.590 CC app/trace_record/trace_record.o 00:03:13.590 TEST_HEADER include/spdk/bdev_module.h 00:03:13.590 TEST_HEADER include/spdk/bdev_zone.h 00:03:13.590 TEST_HEADER include/spdk/bit_array.h 00:03:13.590 TEST_HEADER include/spdk/bit_pool.h 00:03:13.590 TEST_HEADER include/spdk/blob_bdev.h 00:03:13.590 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:13.590 TEST_HEADER include/spdk/blobfs.h 00:03:13.590 TEST_HEADER include/spdk/blob.h 00:03:13.590 TEST_HEADER include/spdk/conf.h 00:03:13.590 TEST_HEADER include/spdk/config.h 00:03:13.590 TEST_HEADER include/spdk/cpuset.h 00:03:13.590 TEST_HEADER include/spdk/crc16.h 00:03:13.590 TEST_HEADER include/spdk/crc32.h 00:03:13.590 TEST_HEADER include/spdk/crc64.h 00:03:13.590 CC app/nvmf_tgt/nvmf_main.o 00:03:13.590 TEST_HEADER include/spdk/dif.h 00:03:13.590 TEST_HEADER include/spdk/dma.h 00:03:13.590 TEST_HEADER include/spdk/endian.h 00:03:13.590 TEST_HEADER include/spdk/env_dpdk.h 00:03:13.590 TEST_HEADER include/spdk/env.h 00:03:13.590 TEST_HEADER include/spdk/event.h 00:03:13.590 TEST_HEADER include/spdk/fd_group.h 00:03:13.590 TEST_HEADER include/spdk/fd.h 00:03:13.590 TEST_HEADER include/spdk/file.h 00:03:13.590 TEST_HEADER include/spdk/fsdev.h 00:03:13.590 TEST_HEADER 
include/spdk/fsdev_module.h 00:03:13.590 TEST_HEADER include/spdk/ftl.h 00:03:13.590 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:13.590 TEST_HEADER include/spdk/gpt_spec.h 00:03:13.590 TEST_HEADER include/spdk/hexlify.h 00:03:13.590 TEST_HEADER include/spdk/histogram_data.h 00:03:13.590 TEST_HEADER include/spdk/idxd.h 00:03:13.590 TEST_HEADER include/spdk/idxd_spec.h 00:03:13.590 TEST_HEADER include/spdk/init.h 00:03:13.590 TEST_HEADER include/spdk/ioat.h 00:03:13.590 CC examples/util/zipf/zipf.o 00:03:13.590 TEST_HEADER include/spdk/ioat_spec.h 00:03:13.590 TEST_HEADER include/spdk/iscsi_spec.h 00:03:13.590 TEST_HEADER include/spdk/json.h 00:03:13.590 CC test/thread/poller_perf/poller_perf.o 00:03:13.590 TEST_HEADER include/spdk/jsonrpc.h 00:03:13.590 TEST_HEADER include/spdk/keyring.h 00:03:13.590 TEST_HEADER include/spdk/keyring_module.h 00:03:13.590 TEST_HEADER include/spdk/likely.h 00:03:13.590 TEST_HEADER include/spdk/log.h 00:03:13.590 TEST_HEADER include/spdk/lvol.h 00:03:13.590 TEST_HEADER include/spdk/md5.h 00:03:13.590 TEST_HEADER include/spdk/memory.h 00:03:13.590 TEST_HEADER include/spdk/mmio.h 00:03:13.590 TEST_HEADER include/spdk/nbd.h 00:03:13.590 CC test/dma/test_dma/test_dma.o 00:03:13.590 TEST_HEADER include/spdk/net.h 00:03:13.590 TEST_HEADER include/spdk/notify.h 00:03:13.590 TEST_HEADER include/spdk/nvme.h 00:03:13.590 TEST_HEADER include/spdk/nvme_intel.h 00:03:13.590 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:13.590 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:13.590 TEST_HEADER include/spdk/nvme_spec.h 00:03:13.590 TEST_HEADER include/spdk/nvme_zns.h 00:03:13.590 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:13.590 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:13.590 TEST_HEADER include/spdk/nvmf.h 00:03:13.590 TEST_HEADER include/spdk/nvmf_spec.h 00:03:13.590 CC test/app/bdev_svc/bdev_svc.o 00:03:13.590 TEST_HEADER include/spdk/nvmf_transport.h 00:03:13.590 TEST_HEADER include/spdk/opal.h 00:03:13.590 TEST_HEADER 
include/spdk/opal_spec.h 00:03:13.590 TEST_HEADER include/spdk/pci_ids.h 00:03:13.590 TEST_HEADER include/spdk/pipe.h 00:03:13.590 TEST_HEADER include/spdk/queue.h 00:03:13.590 TEST_HEADER include/spdk/reduce.h 00:03:13.590 TEST_HEADER include/spdk/rpc.h 00:03:13.590 TEST_HEADER include/spdk/scheduler.h 00:03:13.590 CC test/env/mem_callbacks/mem_callbacks.o 00:03:13.590 TEST_HEADER include/spdk/scsi.h 00:03:13.590 TEST_HEADER include/spdk/scsi_spec.h 00:03:13.590 TEST_HEADER include/spdk/sock.h 00:03:13.590 TEST_HEADER include/spdk/stdinc.h 00:03:13.590 TEST_HEADER include/spdk/string.h 00:03:13.590 TEST_HEADER include/spdk/thread.h 00:03:13.590 TEST_HEADER include/spdk/trace.h 00:03:13.590 TEST_HEADER include/spdk/trace_parser.h 00:03:13.590 TEST_HEADER include/spdk/tree.h 00:03:13.590 TEST_HEADER include/spdk/ublk.h 00:03:13.590 TEST_HEADER include/spdk/util.h 00:03:13.590 TEST_HEADER include/spdk/uuid.h 00:03:13.590 TEST_HEADER include/spdk/version.h 00:03:13.590 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:13.590 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:13.590 TEST_HEADER include/spdk/vhost.h 00:03:13.590 TEST_HEADER include/spdk/vmd.h 00:03:13.590 TEST_HEADER include/spdk/xor.h 00:03:13.590 TEST_HEADER include/spdk/zipf.h 00:03:13.590 CXX test/cpp_headers/accel.o 00:03:13.590 LINK rpc_client_test 00:03:13.590 LINK nvmf_tgt 00:03:13.590 LINK poller_perf 00:03:13.857 LINK zipf 00:03:13.857 LINK spdk_trace_record 00:03:13.857 LINK bdev_svc 00:03:13.857 CXX test/cpp_headers/accel_module.o 00:03:13.857 LINK spdk_trace 00:03:13.857 CC test/app/histogram_perf/histogram_perf.o 00:03:14.116 CXX test/cpp_headers/assert.o 00:03:14.116 CC test/app/jsoncat/jsoncat.o 00:03:14.116 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:14.116 CC examples/ioat/perf/perf.o 00:03:14.116 CC examples/vmd/lsvmd/lsvmd.o 00:03:14.116 LINK histogram_perf 00:03:14.116 LINK jsoncat 00:03:14.116 LINK test_dma 00:03:14.116 CC examples/idxd/perf/perf.o 00:03:14.117 CXX 
test/cpp_headers/barrier.o 00:03:14.117 LINK mem_callbacks 00:03:14.376 CC app/iscsi_tgt/iscsi_tgt.o 00:03:14.376 LINK lsvmd 00:03:14.376 CXX test/cpp_headers/base64.o 00:03:14.376 CC examples/vmd/led/led.o 00:03:14.376 LINK ioat_perf 00:03:14.376 CC test/env/vtophys/vtophys.o 00:03:14.376 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:14.376 LINK iscsi_tgt 00:03:14.636 LINK nvme_fuzz 00:03:14.636 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:14.636 CXX test/cpp_headers/bdev.o 00:03:14.636 LINK led 00:03:14.636 LINK idxd_perf 00:03:14.636 LINK vtophys 00:03:14.636 CC examples/thread/thread/thread_ex.o 00:03:14.636 CC examples/ioat/verify/verify.o 00:03:14.636 LINK interrupt_tgt 00:03:14.636 LINK env_dpdk_post_init 00:03:14.896 CXX test/cpp_headers/bdev_module.o 00:03:14.896 CC app/spdk_tgt/spdk_tgt.o 00:03:14.896 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:14.896 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:14.896 LINK verify 00:03:14.896 LINK thread 00:03:14.896 CC test/env/memory/memory_ut.o 00:03:14.896 CC examples/sock/hello_world/hello_sock.o 00:03:14.896 CXX test/cpp_headers/bdev_zone.o 00:03:14.896 CC test/event/event_perf/event_perf.o 00:03:15.156 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:15.156 CC test/nvme/aer/aer.o 00:03:15.156 LINK spdk_tgt 00:03:15.156 CXX test/cpp_headers/bit_array.o 00:03:15.156 CC test/event/reactor/reactor.o 00:03:15.156 LINK event_perf 00:03:15.156 CC test/nvme/reset/reset.o 00:03:15.156 LINK hello_sock 00:03:15.415 LINK reactor 00:03:15.415 CXX test/cpp_headers/bit_pool.o 00:03:15.415 LINK aer 00:03:15.415 CC app/spdk_lspci/spdk_lspci.o 00:03:15.415 CC test/env/pci/pci_ut.o 00:03:15.415 LINK vhost_fuzz 00:03:15.415 CXX test/cpp_headers/blob_bdev.o 00:03:15.415 LINK reset 00:03:15.674 CC examples/accel/perf/accel_perf.o 00:03:15.674 LINK spdk_lspci 00:03:15.674 CC test/event/reactor_perf/reactor_perf.o 00:03:15.675 CXX test/cpp_headers/blobfs_bdev.o 00:03:15.675 CXX test/cpp_headers/blobfs.o 00:03:15.675 
CC test/event/app_repeat/app_repeat.o 00:03:15.934 LINK reactor_perf 00:03:15.934 CC test/nvme/sgl/sgl.o 00:03:15.934 LINK app_repeat 00:03:15.934 CXX test/cpp_headers/blob.o 00:03:15.934 CC app/spdk_nvme_perf/perf.o 00:03:15.934 LINK pci_ut 00:03:15.934 CC app/spdk_nvme_identify/identify.o 00:03:15.934 CXX test/cpp_headers/conf.o 00:03:16.193 CC test/event/scheduler/scheduler.o 00:03:16.193 LINK accel_perf 00:03:16.193 CXX test/cpp_headers/config.o 00:03:16.193 LINK sgl 00:03:16.193 CXX test/cpp_headers/cpuset.o 00:03:16.193 LINK memory_ut 00:03:16.193 CC test/accel/dif/dif.o 00:03:16.453 CXX test/cpp_headers/crc16.o 00:03:16.453 LINK scheduler 00:03:16.453 CC test/nvme/e2edp/nvme_dp.o 00:03:16.453 CC test/blobfs/mkfs/mkfs.o 00:03:16.453 CC examples/blob/hello_world/hello_blob.o 00:03:16.453 CXX test/cpp_headers/crc32.o 00:03:16.712 CC examples/blob/cli/blobcli.o 00:03:16.712 CXX test/cpp_headers/crc64.o 00:03:16.712 LINK mkfs 00:03:16.712 LINK nvme_dp 00:03:16.712 LINK hello_blob 00:03:16.712 CC test/nvme/overhead/overhead.o 00:03:16.971 CXX test/cpp_headers/dif.o 00:03:16.971 LINK spdk_nvme_perf 00:03:16.971 LINK iscsi_fuzz 00:03:16.971 CXX test/cpp_headers/dma.o 00:03:16.971 CC test/nvme/err_injection/err_injection.o 00:03:17.229 CC examples/nvme/hello_world/hello_world.o 00:03:17.229 LINK dif 00:03:17.229 LINK spdk_nvme_identify 00:03:17.229 LINK overhead 00:03:17.229 CXX test/cpp_headers/endian.o 00:03:17.229 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:17.229 LINK blobcli 00:03:17.229 LINK err_injection 00:03:17.229 CC examples/nvme/reconnect/reconnect.o 00:03:17.229 CXX test/cpp_headers/env_dpdk.o 00:03:17.493 CC test/app/stub/stub.o 00:03:17.493 LINK hello_world 00:03:17.493 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:17.493 CC app/spdk_nvme_discover/discovery_aer.o 00:03:17.493 CC test/nvme/startup/startup.o 00:03:17.493 CXX test/cpp_headers/env.o 00:03:17.493 LINK hello_fsdev 00:03:17.493 CC app/spdk_top/spdk_top.o 00:03:17.493 LINK stub 
00:03:17.752 CC app/vhost/vhost.o 00:03:17.752 LINK startup 00:03:17.752 LINK spdk_nvme_discover 00:03:17.752 LINK reconnect 00:03:17.752 CXX test/cpp_headers/event.o 00:03:17.752 LINK vhost 00:03:17.752 CC examples/bdev/hello_world/hello_bdev.o 00:03:18.011 CXX test/cpp_headers/fd_group.o 00:03:18.011 CC test/nvme/reserve/reserve.o 00:03:18.011 CC examples/bdev/bdevperf/bdevperf.o 00:03:18.011 CC examples/nvme/arbitration/arbitration.o 00:03:18.011 CC test/lvol/esnap/esnap.o 00:03:18.011 CC examples/nvme/hotplug/hotplug.o 00:03:18.011 LINK nvme_manage 00:03:18.011 LINK hello_bdev 00:03:18.011 CXX test/cpp_headers/fd.o 00:03:18.271 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:18.271 LINK reserve 00:03:18.271 CXX test/cpp_headers/file.o 00:03:18.271 CXX test/cpp_headers/fsdev.o 00:03:18.271 LINK hotplug 00:03:18.531 CXX test/cpp_headers/fsdev_module.o 00:03:18.531 LINK cmb_copy 00:03:18.531 LINK arbitration 00:03:18.531 CC test/nvme/simple_copy/simple_copy.o 00:03:18.531 CC examples/nvme/abort/abort.o 00:03:18.531 CXX test/cpp_headers/ftl.o 00:03:18.531 CXX test/cpp_headers/fuse_dispatcher.o 00:03:18.531 CXX test/cpp_headers/gpt_spec.o 00:03:18.531 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:18.791 CXX test/cpp_headers/hexlify.o 00:03:18.791 CXX test/cpp_headers/histogram_data.o 00:03:18.791 LINK spdk_top 00:03:18.791 LINK simple_copy 00:03:18.791 LINK pmr_persistence 00:03:18.791 CC test/nvme/connect_stress/connect_stress.o 00:03:19.053 CC test/nvme/boot_partition/boot_partition.o 00:03:19.053 CXX test/cpp_headers/idxd.o 00:03:19.053 CC test/bdev/bdevio/bdevio.o 00:03:19.053 LINK bdevperf 00:03:19.053 LINK abort 00:03:19.053 CC app/spdk_dd/spdk_dd.o 00:03:19.053 LINK connect_stress 00:03:19.312 LINK boot_partition 00:03:19.313 CXX test/cpp_headers/idxd_spec.o 00:03:19.313 CC test/nvme/compliance/nvme_compliance.o 00:03:19.313 CC app/fio/nvme/fio_plugin.o 00:03:19.313 CXX test/cpp_headers/init.o 00:03:19.313 CC test/nvme/fused_ordering/fused_ordering.o 
00:03:19.313 CC app/fio/bdev/fio_plugin.o 00:03:19.572 LINK bdevio 00:03:19.572 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:19.572 LINK spdk_dd 00:03:19.572 CC examples/nvmf/nvmf/nvmf.o 00:03:19.572 CXX test/cpp_headers/ioat.o 00:03:19.572 LINK nvme_compliance 00:03:19.572 LINK fused_ordering 00:03:19.831 LINK doorbell_aers 00:03:19.831 CXX test/cpp_headers/ioat_spec.o 00:03:19.831 CXX test/cpp_headers/iscsi_spec.o 00:03:19.831 CXX test/cpp_headers/json.o 00:03:19.831 CXX test/cpp_headers/jsonrpc.o 00:03:19.831 CC test/nvme/fdp/fdp.o 00:03:20.091 CC test/nvme/cuse/cuse.o 00:03:20.091 LINK nvmf 00:03:20.091 LINK spdk_nvme 00:03:20.091 CXX test/cpp_headers/keyring.o 00:03:20.091 LINK spdk_bdev 00:03:20.091 CXX test/cpp_headers/keyring_module.o 00:03:20.091 CXX test/cpp_headers/likely.o 00:03:20.091 CXX test/cpp_headers/log.o 00:03:20.091 CXX test/cpp_headers/lvol.o 00:03:20.091 CXX test/cpp_headers/md5.o 00:03:20.091 CXX test/cpp_headers/memory.o 00:03:20.350 CXX test/cpp_headers/mmio.o 00:03:20.350 CXX test/cpp_headers/nbd.o 00:03:20.350 CXX test/cpp_headers/net.o 00:03:20.350 CXX test/cpp_headers/notify.o 00:03:20.350 CXX test/cpp_headers/nvme.o 00:03:20.350 CXX test/cpp_headers/nvme_intel.o 00:03:20.350 LINK fdp 00:03:20.350 CXX test/cpp_headers/nvme_ocssd.o 00:03:20.350 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:20.350 CXX test/cpp_headers/nvme_spec.o 00:03:20.350 CXX test/cpp_headers/nvme_zns.o 00:03:20.610 CXX test/cpp_headers/nvmf_cmd.o 00:03:20.610 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:20.610 CXX test/cpp_headers/nvmf.o 00:03:20.610 CXX test/cpp_headers/nvmf_spec.o 00:03:20.610 CXX test/cpp_headers/nvmf_transport.o 00:03:20.610 CXX test/cpp_headers/opal.o 00:03:20.610 CXX test/cpp_headers/opal_spec.o 00:03:20.610 CXX test/cpp_headers/pci_ids.o 00:03:20.610 CXX test/cpp_headers/pipe.o 00:03:20.610 CXX test/cpp_headers/queue.o 00:03:20.869 CXX test/cpp_headers/reduce.o 00:03:20.869 CXX test/cpp_headers/rpc.o 00:03:20.869 CXX 
test/cpp_headers/scheduler.o 00:03:20.869 CXX test/cpp_headers/scsi.o 00:03:20.869 CXX test/cpp_headers/scsi_spec.o 00:03:20.869 CXX test/cpp_headers/sock.o 00:03:20.869 CXX test/cpp_headers/stdinc.o 00:03:20.869 CXX test/cpp_headers/string.o 00:03:20.869 CXX test/cpp_headers/thread.o 00:03:20.869 CXX test/cpp_headers/trace.o 00:03:20.869 CXX test/cpp_headers/trace_parser.o 00:03:21.129 CXX test/cpp_headers/tree.o 00:03:21.129 CXX test/cpp_headers/ublk.o 00:03:21.129 CXX test/cpp_headers/util.o 00:03:21.129 CXX test/cpp_headers/uuid.o 00:03:21.129 CXX test/cpp_headers/version.o 00:03:21.129 CXX test/cpp_headers/vfio_user_pci.o 00:03:21.129 CXX test/cpp_headers/vfio_user_spec.o 00:03:21.129 CXX test/cpp_headers/vhost.o 00:03:21.129 CXX test/cpp_headers/vmd.o 00:03:21.129 CXX test/cpp_headers/xor.o 00:03:21.129 CXX test/cpp_headers/zipf.o 00:03:21.699 LINK cuse 00:03:24.994 LINK esnap 00:03:25.563 00:03:25.563 real 1m29.235s 00:03:25.563 user 7m48.729s 00:03:25.563 sys 1m47.303s 00:03:25.563 17:48:07 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:25.563 17:48:07 make -- common/autotest_common.sh@10 -- $ set +x 00:03:25.563 ************************************ 00:03:25.563 END TEST make 00:03:25.563 ************************************ 00:03:25.563 17:48:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:25.563 17:48:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:25.563 17:48:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:25.563 17:48:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.563 17:48:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:25.563 17:48:07 -- pm/common@44 -- $ pid=5465 00:03:25.563 17:48:07 -- pm/common@50 -- $ kill -TERM 5465 00:03:25.563 17:48:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.563 17:48:07 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:25.563 17:48:07 -- pm/common@44 -- $ pid=5467 00:03:25.563 17:48:07 -- pm/common@50 -- $ kill -TERM 5467 00:03:25.563 17:48:07 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:25.563 17:48:07 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:25.563 17:48:07 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:25.563 17:48:07 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:25.563 17:48:07 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:25.563 17:48:07 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:25.563 17:48:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:25.563 17:48:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:25.563 17:48:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:25.563 17:48:07 -- scripts/common.sh@336 -- # IFS=.-: 00:03:25.563 17:48:07 -- scripts/common.sh@336 -- # read -ra ver1 00:03:25.563 17:48:07 -- scripts/common.sh@337 -- # IFS=.-: 00:03:25.563 17:48:07 -- scripts/common.sh@337 -- # read -ra ver2 00:03:25.563 17:48:07 -- scripts/common.sh@338 -- # local 'op=<' 00:03:25.563 17:48:07 -- scripts/common.sh@340 -- # ver1_l=2 00:03:25.563 17:48:07 -- scripts/common.sh@341 -- # ver2_l=1 00:03:25.563 17:48:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:25.563 17:48:07 -- scripts/common.sh@344 -- # case "$op" in 00:03:25.563 17:48:07 -- scripts/common.sh@345 -- # : 1 00:03:25.563 17:48:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:25.563 17:48:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:25.563 17:48:07 -- scripts/common.sh@365 -- # decimal 1 00:03:25.563 17:48:07 -- scripts/common.sh@353 -- # local d=1 00:03:25.563 17:48:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:25.563 17:48:07 -- scripts/common.sh@355 -- # echo 1 00:03:25.563 17:48:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:25.563 17:48:07 -- scripts/common.sh@366 -- # decimal 2 00:03:25.563 17:48:07 -- scripts/common.sh@353 -- # local d=2 00:03:25.563 17:48:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:25.563 17:48:07 -- scripts/common.sh@355 -- # echo 2 00:03:25.563 17:48:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:25.563 17:48:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:25.563 17:48:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:25.563 17:48:07 -- scripts/common.sh@368 -- # return 0 00:03:25.563 17:48:07 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:25.563 17:48:07 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:25.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.563 --rc genhtml_branch_coverage=1 00:03:25.563 --rc genhtml_function_coverage=1 00:03:25.563 --rc genhtml_legend=1 00:03:25.563 --rc geninfo_all_blocks=1 00:03:25.563 --rc geninfo_unexecuted_blocks=1 00:03:25.563 00:03:25.563 ' 00:03:25.563 17:48:07 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:25.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.563 --rc genhtml_branch_coverage=1 00:03:25.563 --rc genhtml_function_coverage=1 00:03:25.563 --rc genhtml_legend=1 00:03:25.564 --rc geninfo_all_blocks=1 00:03:25.564 --rc geninfo_unexecuted_blocks=1 00:03:25.564 00:03:25.564 ' 00:03:25.564 17:48:07 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:25.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.564 --rc genhtml_branch_coverage=1 00:03:25.564 --rc 
genhtml_function_coverage=1 00:03:25.564 --rc genhtml_legend=1 00:03:25.564 --rc geninfo_all_blocks=1 00:03:25.564 --rc geninfo_unexecuted_blocks=1 00:03:25.564 00:03:25.564 ' 00:03:25.564 17:48:07 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:25.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.564 --rc genhtml_branch_coverage=1 00:03:25.564 --rc genhtml_function_coverage=1 00:03:25.564 --rc genhtml_legend=1 00:03:25.564 --rc geninfo_all_blocks=1 00:03:25.564 --rc geninfo_unexecuted_blocks=1 00:03:25.564 00:03:25.564 ' 00:03:25.564 17:48:07 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:25.564 17:48:07 -- nvmf/common.sh@7 -- # uname -s 00:03:25.564 17:48:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:25.564 17:48:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:25.564 17:48:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:25.564 17:48:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:25.564 17:48:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:25.564 17:48:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:25.564 17:48:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:25.564 17:48:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:25.564 17:48:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:25.564 17:48:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:25.822 17:48:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b590854-7bd7-4381-93fd-b908217718d3 00:03:25.822 17:48:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b590854-7bd7-4381-93fd-b908217718d3 00:03:25.822 17:48:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:25.822 17:48:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:25.822 17:48:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:25.822 17:48:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:25.822 17:48:07 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:25.822 17:48:07 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:25.822 17:48:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:25.822 17:48:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:25.822 17:48:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:25.822 17:48:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.822 17:48:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.822 17:48:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.822 17:48:07 -- paths/export.sh@5 -- # export PATH 00:03:25.822 17:48:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.822 17:48:07 -- nvmf/common.sh@51 -- # : 0 00:03:25.822 17:48:07 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:25.822 17:48:07 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:25.822 17:48:07 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:25.822 17:48:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:25.822 17:48:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:25.822 17:48:07 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:25.822 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:25.822 17:48:07 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:25.822 17:48:07 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:25.822 17:48:07 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:25.822 17:48:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:25.822 17:48:07 -- spdk/autotest.sh@32 -- # uname -s 00:03:25.822 17:48:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:25.822 17:48:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:25.822 17:48:07 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:25.822 17:48:07 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:25.822 17:48:07 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:25.822 17:48:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:25.822 17:48:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:25.822 17:48:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:25.822 17:48:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:25.822 17:48:07 -- spdk/autotest.sh@48 -- # udevadm_pid=54472 00:03:25.822 17:48:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:25.822 17:48:07 -- pm/common@17 -- # local monitor 00:03:25.822 17:48:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.822 17:48:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.822 17:48:07 -- pm/common@25 -- # sleep 1 00:03:25.822 17:48:07 -- pm/common@21 -- # date +%s 00:03:25.822 17:48:07 -- 
pm/common@21 -- # date +%s 00:03:25.822 17:48:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732643287 00:03:25.822 17:48:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732643287 00:03:25.822 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732643287_collect-vmstat.pm.log 00:03:25.822 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732643287_collect-cpu-load.pm.log 00:03:26.758 17:48:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:26.758 17:48:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:26.758 17:48:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:26.758 17:48:08 -- common/autotest_common.sh@10 -- # set +x 00:03:26.758 17:48:08 -- spdk/autotest.sh@59 -- # create_test_list 00:03:26.758 17:48:08 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:26.758 17:48:08 -- common/autotest_common.sh@10 -- # set +x 00:03:26.758 17:48:08 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:26.758 17:48:08 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:26.758 17:48:08 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:26.758 17:48:08 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:26.758 17:48:08 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:26.758 17:48:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:26.758 17:48:08 -- common/autotest_common.sh@1457 -- # uname 00:03:26.758 17:48:08 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:26.758 17:48:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:27.017 17:48:08 -- common/autotest_common.sh@1477 -- 
# uname 00:03:27.017 17:48:08 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:27.017 17:48:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:27.017 17:48:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:27.017 lcov: LCOV version 1.15 00:03:27.018 17:48:08 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:42.024 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:42.025 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:00.126 17:48:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:00.126 17:48:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.126 17:48:40 -- common/autotest_common.sh@10 -- # set +x 00:04:00.126 17:48:40 -- spdk/autotest.sh@78 -- # rm -f 00:04:00.126 17:48:40 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.126 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.126 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:00.126 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:00.126 17:48:41 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:00.126 17:48:41 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:00.126 17:48:41 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:00.126 17:48:41 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:00.126 
17:48:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:00.126 17:48:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:00.126 17:48:41 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:00.126 17:48:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:00.126 17:48:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.126 17:48:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:00.126 17:48:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:00.126 17:48:41 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:00.126 17:48:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:00.126 17:48:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.126 17:48:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:00.126 17:48:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:00.126 17:48:41 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:00.126 17:48:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:00.126 17:48:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.126 17:48:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:00.126 17:48:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:00.126 17:48:41 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:00.126 17:48:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:00.126 17:48:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.126 17:48:41 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:00.126 17:48:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.126 17:48:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.126 17:48:41 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:04:00.126 17:48:41 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:00.126 17:48:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:00.126 No valid GPT data, bailing 00:04:00.126 17:48:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:00.126 17:48:41 -- scripts/common.sh@394 -- # pt= 00:04:00.126 17:48:41 -- scripts/common.sh@395 -- # return 1 00:04:00.126 17:48:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:00.126 1+0 records in 00:04:00.127 1+0 records out 00:04:00.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435953 s, 241 MB/s 00:04:00.127 17:48:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.127 17:48:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.127 17:48:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:00.127 17:48:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:00.127 17:48:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:00.127 No valid GPT data, bailing 00:04:00.127 17:48:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:00.127 17:48:41 -- scripts/common.sh@394 -- # pt= 00:04:00.127 17:48:41 -- scripts/common.sh@395 -- # return 1 00:04:00.127 17:48:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:00.127 1+0 records in 00:04:00.127 1+0 records out 00:04:00.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516636 s, 203 MB/s 00:04:00.127 17:48:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.127 17:48:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.127 17:48:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:00.127 17:48:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:00.127 17:48:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:04:00.127 No valid GPT data, bailing 00:04:00.127 17:48:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:00.127 17:48:41 -- scripts/common.sh@394 -- # pt= 00:04:00.127 17:48:41 -- scripts/common.sh@395 -- # return 1 00:04:00.127 17:48:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:00.127 1+0 records in 00:04:00.127 1+0 records out 00:04:00.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00626643 s, 167 MB/s 00:04:00.127 17:48:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.127 17:48:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.127 17:48:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:00.127 17:48:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:00.127 17:48:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:00.127 No valid GPT data, bailing 00:04:00.127 17:48:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:00.127 17:48:41 -- scripts/common.sh@394 -- # pt= 00:04:00.127 17:48:41 -- scripts/common.sh@395 -- # return 1 00:04:00.127 17:48:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:00.127 1+0 records in 00:04:00.127 1+0 records out 00:04:00.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00634021 s, 165 MB/s 00:04:00.127 17:48:41 -- spdk/autotest.sh@105 -- # sync 00:04:00.127 17:48:41 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:00.127 17:48:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:00.127 17:48:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:02.669 17:48:44 -- spdk/autotest.sh@111 -- # uname -s 00:04:02.669 17:48:44 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:02.669 17:48:44 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:02.669 17:48:44 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:04:03.603 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.603 Hugepages 00:04:03.603 node hugesize free / total 00:04:03.603 node0 1048576kB 0 / 0 00:04:03.603 node0 2048kB 0 / 0 00:04:03.603 00:04:03.603 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:03.603 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:03.603 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:03.862 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:03.862 17:48:45 -- spdk/autotest.sh@117 -- # uname -s 00:04:03.862 17:48:45 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:03.862 17:48:45 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:03.862 17:48:45 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.796 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.796 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.796 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.796 17:48:46 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:05.735 17:48:47 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:05.735 17:48:47 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:05.735 17:48:47 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:05.735 17:48:47 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:05.735 17:48:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:05.735 17:48:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:05.735 17:48:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.735 17:48:47 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:05.735 17:48:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:05.994 17:48:47 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:05.994 17:48:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:05.994 17:48:47 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:06.564 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.564 Waiting for block devices as requested 00:04:06.564 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:06.564 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:06.824 17:48:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:06.824 17:48:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:06.824 17:48:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:06.824 17:48:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:06.824 17:48:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:06.824 17:48:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:06.824 17:48:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:06.824 17:48:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:06.824 17:48:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:06.824 17:48:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:06.824 17:48:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:06.825 17:48:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:06.825 17:48:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:06.825 17:48:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:06.825 17:48:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:06.825 17:48:48 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:04:06.825 17:48:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:06.825 17:48:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:06.825 17:48:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:06.825 17:48:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:06.825 17:48:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:06.825 17:48:48 -- common/autotest_common.sh@1543 -- # continue 00:04:06.825 17:48:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:06.825 17:48:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:06.825 17:48:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:06.825 17:48:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:06.825 17:48:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:06.825 17:48:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:06.825 17:48:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:06.825 17:48:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:06.825 17:48:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:06.825 17:48:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:06.825 17:48:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:06.825 17:48:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:06.825 17:48:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:06.825 17:48:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:06.825 17:48:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:06.825 17:48:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:06.825 17:48:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:04:06.825 17:48:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:06.825 17:48:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:06.825 17:48:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:06.825 17:48:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:06.825 17:48:48 -- common/autotest_common.sh@1543 -- # continue 00:04:06.825 17:48:48 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:06.825 17:48:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:06.825 17:48:48 -- common/autotest_common.sh@10 -- # set +x 00:04:06.825 17:48:48 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:06.825 17:48:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.825 17:48:48 -- common/autotest_common.sh@10 -- # set +x 00:04:06.825 17:48:48 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:07.763 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.763 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.763 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:08.022 17:48:49 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:08.022 17:48:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:08.022 17:48:49 -- common/autotest_common.sh@10 -- # set +x 00:04:08.022 17:48:49 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:08.022 17:48:49 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:08.022 17:48:49 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:08.022 17:48:49 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:08.022 17:48:49 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:08.022 17:48:49 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:08.022 17:48:49 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:08.022 17:48:49 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:08.022 
17:48:49 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:08.022 17:48:49 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:08.022 17:48:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:08.022 17:48:49 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:08.022 17:48:49 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:08.022 17:48:49 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:08.022 17:48:49 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:08.022 17:48:49 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:08.022 17:48:49 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:08.022 17:48:49 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:08.022 17:48:49 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:08.022 17:48:49 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:08.022 17:48:49 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:08.022 17:48:49 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:08.022 17:48:49 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:08.022 17:48:49 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:08.022 17:48:49 -- common/autotest_common.sh@1572 -- # return 0 00:04:08.022 17:48:49 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:08.022 17:48:49 -- common/autotest_common.sh@1580 -- # return 0 00:04:08.022 17:48:49 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:08.022 17:48:49 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:08.022 17:48:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:08.022 17:48:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:08.022 17:48:49 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:08.022 17:48:49 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.022 17:48:49 -- common/autotest_common.sh@10 -- # set +x 00:04:08.022 17:48:49 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:08.022 17:48:49 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:08.022 17:48:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.022 17:48:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.022 17:48:49 -- common/autotest_common.sh@10 -- # set +x 00:04:08.022 ************************************ 00:04:08.022 START TEST env 00:04:08.022 ************************************ 00:04:08.022 17:48:49 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:08.282 * Looking for test storage... 00:04:08.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:08.282 17:48:49 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:08.282 17:48:49 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:08.282 17:48:49 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:08.282 17:48:50 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:08.282 17:48:50 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.282 17:48:50 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.282 17:48:50 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.282 17:48:50 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.282 17:48:50 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.282 17:48:50 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.282 17:48:50 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.282 17:48:50 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.282 17:48:50 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.282 17:48:50 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.282 17:48:50 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.282 17:48:50 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:08.282 17:48:50 env -- scripts/common.sh@345 -- # : 1 00:04:08.282 17:48:50 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.282 17:48:50 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.282 17:48:50 env -- scripts/common.sh@365 -- # decimal 1 00:04:08.282 17:48:50 env -- scripts/common.sh@353 -- # local d=1 00:04:08.282 17:48:50 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.282 17:48:50 env -- scripts/common.sh@355 -- # echo 1 00:04:08.282 17:48:50 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.282 17:48:50 env -- scripts/common.sh@366 -- # decimal 2 00:04:08.282 17:48:50 env -- scripts/common.sh@353 -- # local d=2 00:04:08.282 17:48:50 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.282 17:48:50 env -- scripts/common.sh@355 -- # echo 2 00:04:08.282 17:48:50 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.282 17:48:50 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.282 17:48:50 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.282 17:48:50 env -- scripts/common.sh@368 -- # return 0 00:04:08.282 17:48:50 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.282 17:48:50 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:08.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.282 --rc genhtml_branch_coverage=1 00:04:08.282 --rc genhtml_function_coverage=1 00:04:08.282 --rc genhtml_legend=1 00:04:08.282 --rc geninfo_all_blocks=1 00:04:08.282 --rc geninfo_unexecuted_blocks=1 00:04:08.282 00:04:08.282 ' 00:04:08.282 17:48:50 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:08.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.282 --rc genhtml_branch_coverage=1 00:04:08.282 --rc genhtml_function_coverage=1 00:04:08.282 --rc genhtml_legend=1 00:04:08.282 --rc 
geninfo_all_blocks=1 00:04:08.282 --rc geninfo_unexecuted_blocks=1 00:04:08.282 00:04:08.282 ' 00:04:08.282 17:48:50 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:08.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.282 --rc genhtml_branch_coverage=1 00:04:08.282 --rc genhtml_function_coverage=1 00:04:08.282 --rc genhtml_legend=1 00:04:08.282 --rc geninfo_all_blocks=1 00:04:08.282 --rc geninfo_unexecuted_blocks=1 00:04:08.282 00:04:08.282 ' 00:04:08.282 17:48:50 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:08.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.282 --rc genhtml_branch_coverage=1 00:04:08.282 --rc genhtml_function_coverage=1 00:04:08.282 --rc genhtml_legend=1 00:04:08.282 --rc geninfo_all_blocks=1 00:04:08.282 --rc geninfo_unexecuted_blocks=1 00:04:08.282 00:04:08.282 ' 00:04:08.282 17:48:50 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:08.282 17:48:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.282 17:48:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.282 17:48:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.282 ************************************ 00:04:08.282 START TEST env_memory 00:04:08.282 ************************************ 00:04:08.282 17:48:50 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:08.282 00:04:08.282 00:04:08.282 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.282 http://cunit.sourceforge.net/ 00:04:08.282 00:04:08.282 00:04:08.282 Suite: memory 00:04:08.541 Test: alloc and free memory map ...[2024-11-26 17:48:50.176177] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:08.541 passed 00:04:08.541 Test: mem map translation ...[2024-11-26 17:48:50.227221] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:08.541 [2024-11-26 17:48:50.227284] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:08.541 [2024-11-26 17:48:50.227356] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:08.541 [2024-11-26 17:48:50.227379] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:08.541 passed 00:04:08.541 Test: mem map registration ...[2024-11-26 17:48:50.306248] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:08.541 [2024-11-26 17:48:50.306324] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:08.541 passed 00:04:08.801 Test: mem map adjacent registrations ...passed 00:04:08.801 00:04:08.801 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.801 suites 1 1 n/a 0 0 00:04:08.801 tests 4 4 4 0 0 00:04:08.801 asserts 152 152 152 0 n/a 00:04:08.801 00:04:08.801 Elapsed time = 0.279 seconds 00:04:08.801 00:04:08.801 real 0m0.331s 00:04:08.801 user 0m0.289s 00:04:08.801 sys 0m0.032s 00:04:08.801 17:48:50 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.801 17:48:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:08.801 ************************************ 00:04:08.801 END TEST env_memory 00:04:08.801 ************************************ 00:04:08.801 17:48:50 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:08.801 
17:48:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.801 17:48:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.801 17:48:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.801 ************************************ 00:04:08.801 START TEST env_vtophys 00:04:08.801 ************************************ 00:04:08.801 17:48:50 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:08.801 EAL: lib.eal log level changed from notice to debug 00:04:08.801 EAL: Detected lcore 0 as core 0 on socket 0 00:04:08.801 EAL: Detected lcore 1 as core 0 on socket 0 00:04:08.801 EAL: Detected lcore 2 as core 0 on socket 0 00:04:08.801 EAL: Detected lcore 3 as core 0 on socket 0 00:04:08.801 EAL: Detected lcore 4 as core 0 on socket 0 00:04:08.801 EAL: Detected lcore 5 as core 0 on socket 0 00:04:08.801 EAL: Detected lcore 6 as core 0 on socket 0 00:04:08.801 EAL: Detected lcore 7 as core 0 on socket 0 00:04:08.801 EAL: Detected lcore 8 as core 0 on socket 0 00:04:08.801 EAL: Detected lcore 9 as core 0 on socket 0 00:04:08.801 EAL: Maximum logical cores by configuration: 128 00:04:08.801 EAL: Detected CPU lcores: 10 00:04:08.801 EAL: Detected NUMA nodes: 1 00:04:08.801 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:08.801 EAL: Detected shared linkage of DPDK 00:04:08.801 EAL: No shared files mode enabled, IPC will be disabled 00:04:08.801 EAL: Selected IOVA mode 'PA' 00:04:08.801 EAL: Probing VFIO support... 00:04:08.801 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:08.801 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:08.801 EAL: Ask a virtual area of 0x2e000 bytes 00:04:08.801 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:08.801 EAL: Setting up physically contiguous memory... 
00:04:08.801 EAL: Setting maximum number of open files to 524288 00:04:08.801 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:08.801 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:08.801 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.801 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:08.801 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.801 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.801 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:08.801 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:08.801 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.801 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:08.801 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.801 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.801 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:08.801 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:08.801 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.801 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:08.801 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.801 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.801 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:08.801 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:08.801 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.801 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:08.801 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.801 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.801 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:08.801 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:08.801 EAL: Hugepages will be freed exactly as allocated. 
00:04:08.801 EAL: No shared files mode enabled, IPC is disabled 00:04:08.801 EAL: No shared files mode enabled, IPC is disabled 00:04:09.060 EAL: TSC frequency is ~2290000 KHz 00:04:09.060 EAL: Main lcore 0 is ready (tid=7f6e83b3aa40;cpuset=[0]) 00:04:09.060 EAL: Trying to obtain current memory policy. 00:04:09.060 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.060 EAL: Restoring previous memory policy: 0 00:04:09.060 EAL: request: mp_malloc_sync 00:04:09.060 EAL: No shared files mode enabled, IPC is disabled 00:04:09.060 EAL: Heap on socket 0 was expanded by 2MB 00:04:09.060 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:09.060 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:09.060 EAL: Mem event callback 'spdk:(nil)' registered 00:04:09.060 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:09.060 00:04:09.060 00:04:09.060 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.060 http://cunit.sourceforge.net/ 00:04:09.060 00:04:09.060 00:04:09.060 Suite: components_suite 00:04:09.319 Test: vtophys_malloc_test ...passed 00:04:09.319 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:09.319 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.319 EAL: Restoring previous memory policy: 4 00:04:09.319 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.319 EAL: request: mp_malloc_sync 00:04:09.319 EAL: No shared files mode enabled, IPC is disabled 00:04:09.319 EAL: Heap on socket 0 was expanded by 4MB 00:04:09.319 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.319 EAL: request: mp_malloc_sync 00:04:09.319 EAL: No shared files mode enabled, IPC is disabled 00:04:09.319 EAL: Heap on socket 0 was shrunk by 4MB 00:04:09.319 EAL: Trying to obtain current memory policy. 
00:04:09.319 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.319 EAL: Restoring previous memory policy: 4 00:04:09.319 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.319 EAL: request: mp_malloc_sync 00:04:09.319 EAL: No shared files mode enabled, IPC is disabled 00:04:09.319 EAL: Heap on socket 0 was expanded by 6MB 00:04:09.319 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.319 EAL: request: mp_malloc_sync 00:04:09.319 EAL: No shared files mode enabled, IPC is disabled 00:04:09.319 EAL: Heap on socket 0 was shrunk by 6MB 00:04:09.319 EAL: Trying to obtain current memory policy. 00:04:09.319 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.319 EAL: Restoring previous memory policy: 4 00:04:09.319 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.319 EAL: request: mp_malloc_sync 00:04:09.319 EAL: No shared files mode enabled, IPC is disabled 00:04:09.319 EAL: Heap on socket 0 was expanded by 10MB 00:04:09.319 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.319 EAL: request: mp_malloc_sync 00:04:09.319 EAL: No shared files mode enabled, IPC is disabled 00:04:09.319 EAL: Heap on socket 0 was shrunk by 10MB 00:04:09.319 EAL: Trying to obtain current memory policy. 00:04:09.319 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.319 EAL: Restoring previous memory policy: 4 00:04:09.319 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.319 EAL: request: mp_malloc_sync 00:04:09.319 EAL: No shared files mode enabled, IPC is disabled 00:04:09.319 EAL: Heap on socket 0 was expanded by 18MB 00:04:09.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.579 EAL: request: mp_malloc_sync 00:04:09.579 EAL: No shared files mode enabled, IPC is disabled 00:04:09.579 EAL: Heap on socket 0 was shrunk by 18MB 00:04:09.579 EAL: Trying to obtain current memory policy. 
00:04:09.579 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.579 EAL: Restoring previous memory policy: 4 00:04:09.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.579 EAL: request: mp_malloc_sync 00:04:09.579 EAL: No shared files mode enabled, IPC is disabled 00:04:09.579 EAL: Heap on socket 0 was expanded by 34MB 00:04:09.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.579 EAL: request: mp_malloc_sync 00:04:09.579 EAL: No shared files mode enabled, IPC is disabled 00:04:09.579 EAL: Heap on socket 0 was shrunk by 34MB 00:04:09.579 EAL: Trying to obtain current memory policy. 00:04:09.579 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.579 EAL: Restoring previous memory policy: 4 00:04:09.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.579 EAL: request: mp_malloc_sync 00:04:09.579 EAL: No shared files mode enabled, IPC is disabled 00:04:09.579 EAL: Heap on socket 0 was expanded by 66MB 00:04:09.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.839 EAL: request: mp_malloc_sync 00:04:09.839 EAL: No shared files mode enabled, IPC is disabled 00:04:09.839 EAL: Heap on socket 0 was shrunk by 66MB 00:04:09.839 EAL: Trying to obtain current memory policy. 00:04:09.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.839 EAL: Restoring previous memory policy: 4 00:04:09.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.839 EAL: request: mp_malloc_sync 00:04:09.839 EAL: No shared files mode enabled, IPC is disabled 00:04:09.839 EAL: Heap on socket 0 was expanded by 130MB 00:04:10.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.099 EAL: request: mp_malloc_sync 00:04:10.099 EAL: No shared files mode enabled, IPC is disabled 00:04:10.099 EAL: Heap on socket 0 was shrunk by 130MB 00:04:10.358 EAL: Trying to obtain current memory policy. 
00:04:10.358 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.358 EAL: Restoring previous memory policy: 4 00:04:10.358 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.358 EAL: request: mp_malloc_sync 00:04:10.358 EAL: No shared files mode enabled, IPC is disabled 00:04:10.358 EAL: Heap on socket 0 was expanded by 258MB 00:04:10.927 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.927 EAL: request: mp_malloc_sync 00:04:10.927 EAL: No shared files mode enabled, IPC is disabled 00:04:10.927 EAL: Heap on socket 0 was shrunk by 258MB 00:04:11.495 EAL: Trying to obtain current memory policy. 00:04:11.495 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.495 EAL: Restoring previous memory policy: 4 00:04:11.495 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.495 EAL: request: mp_malloc_sync 00:04:11.495 EAL: No shared files mode enabled, IPC is disabled 00:04:11.495 EAL: Heap on socket 0 was expanded by 514MB 00:04:12.435 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.694 EAL: request: mp_malloc_sync 00:04:12.694 EAL: No shared files mode enabled, IPC is disabled 00:04:12.694 EAL: Heap on socket 0 was shrunk by 514MB 00:04:13.633 EAL: Trying to obtain current memory policy. 
00:04:13.633 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.633 EAL: Restoring previous memory policy: 4 00:04:13.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.633 EAL: request: mp_malloc_sync 00:04:13.633 EAL: No shared files mode enabled, IPC is disabled 00:04:13.633 EAL: Heap on socket 0 was expanded by 1026MB 00:04:15.585 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.844 EAL: request: mp_malloc_sync 00:04:15.844 EAL: No shared files mode enabled, IPC is disabled 00:04:15.844 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:17.752 passed 00:04:17.752 00:04:17.752 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.752 suites 1 1 n/a 0 0 00:04:17.752 tests 2 2 2 0 0 00:04:17.752 asserts 5705 5705 5705 0 n/a 00:04:17.752 00:04:17.752 Elapsed time = 8.595 seconds 00:04:17.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.752 EAL: request: mp_malloc_sync 00:04:17.752 EAL: No shared files mode enabled, IPC is disabled 00:04:17.752 EAL: Heap on socket 0 was shrunk by 2MB 00:04:17.752 EAL: No shared files mode enabled, IPC is disabled 00:04:17.752 EAL: No shared files mode enabled, IPC is disabled 00:04:17.752 EAL: No shared files mode enabled, IPC is disabled 00:04:17.752 00:04:17.752 real 0m8.932s 00:04:17.752 user 0m7.922s 00:04:17.752 sys 0m0.848s 00:04:17.752 17:48:59 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.752 17:48:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:17.752 ************************************ 00:04:17.752 END TEST env_vtophys 00:04:17.752 ************************************ 00:04:17.752 17:48:59 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:17.752 17:48:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.752 17:48:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.752 17:48:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.752 
************************************ 00:04:17.752 START TEST env_pci 00:04:17.752 ************************************ 00:04:17.752 17:48:59 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:17.752 00:04:17.752 00:04:17.753 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.753 http://cunit.sourceforge.net/ 00:04:17.753 00:04:17.753 00:04:17.753 Suite: pci 00:04:17.753 Test: pci_hook ...[2024-11-26 17:48:59.530963] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56820 has claimed it 00:04:17.753 passed 00:04:17.753 00:04:17.753 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.753 suites 1 1 n/a 0 0 00:04:17.753 tests 1 1 1 0 0 00:04:17.753 asserts 25 25 25 0 n/a 00:04:17.753 00:04:17.753 Elapsed time = 0.006 seconds 00:04:17.753 EAL: Cannot find device (10000:00:01.0) 00:04:17.753 EAL: Failed to attach device on primary process 00:04:17.753 00:04:17.753 real 0m0.101s 00:04:17.753 user 0m0.047s 00:04:17.753 sys 0m0.053s 00:04:17.753 17:48:59 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.753 17:48:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:17.753 ************************************ 00:04:17.753 END TEST env_pci 00:04:17.753 ************************************ 00:04:18.012 17:48:59 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:18.012 17:48:59 env -- env/env.sh@15 -- # uname 00:04:18.012 17:48:59 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:18.012 17:48:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:18.012 17:48:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:18.012 17:48:59 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:18.012 17:48:59 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.012 17:48:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.012 ************************************ 00:04:18.012 START TEST env_dpdk_post_init 00:04:18.012 ************************************ 00:04:18.012 17:48:59 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:18.012 EAL: Detected CPU lcores: 10 00:04:18.012 EAL: Detected NUMA nodes: 1 00:04:18.012 EAL: Detected shared linkage of DPDK 00:04:18.013 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:18.013 EAL: Selected IOVA mode 'PA' 00:04:18.013 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:18.272 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:18.272 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:18.272 Starting DPDK initialization... 00:04:18.272 Starting SPDK post initialization... 00:04:18.272 SPDK NVMe probe 00:04:18.272 Attaching to 0000:00:10.0 00:04:18.272 Attaching to 0000:00:11.0 00:04:18.272 Attached to 0000:00:10.0 00:04:18.272 Attached to 0000:00:11.0 00:04:18.272 Cleaning up... 
00:04:18.272 00:04:18.272 real 0m0.296s 00:04:18.272 user 0m0.103s 00:04:18.272 sys 0m0.093s 00:04:18.272 17:48:59 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.272 17:48:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:18.272 ************************************ 00:04:18.272 END TEST env_dpdk_post_init 00:04:18.272 ************************************ 00:04:18.272 17:49:00 env -- env/env.sh@26 -- # uname 00:04:18.272 17:49:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:18.272 17:49:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:18.272 17:49:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.272 17:49:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.272 17:49:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.272 ************************************ 00:04:18.272 START TEST env_mem_callbacks 00:04:18.272 ************************************ 00:04:18.272 17:49:00 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:18.272 EAL: Detected CPU lcores: 10 00:04:18.272 EAL: Detected NUMA nodes: 1 00:04:18.272 EAL: Detected shared linkage of DPDK 00:04:18.272 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:18.272 EAL: Selected IOVA mode 'PA' 00:04:18.532 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:18.532 00:04:18.532 00:04:18.532 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.532 http://cunit.sourceforge.net/ 00:04:18.532 00:04:18.532 00:04:18.532 Suite: memory 00:04:18.532 Test: test ... 
00:04:18.532 register 0x200000200000 2097152 00:04:18.532 malloc 3145728 00:04:18.532 register 0x200000400000 4194304 00:04:18.532 buf 0x2000004fffc0 len 3145728 PASSED 00:04:18.532 malloc 64 00:04:18.532 buf 0x2000004ffec0 len 64 PASSED 00:04:18.532 malloc 4194304 00:04:18.532 register 0x200000800000 6291456 00:04:18.532 buf 0x2000009fffc0 len 4194304 PASSED 00:04:18.532 free 0x2000004fffc0 3145728 00:04:18.532 free 0x2000004ffec0 64 00:04:18.532 unregister 0x200000400000 4194304 PASSED 00:04:18.532 free 0x2000009fffc0 4194304 00:04:18.532 unregister 0x200000800000 6291456 PASSED 00:04:18.532 malloc 8388608 00:04:18.532 register 0x200000400000 10485760 00:04:18.532 buf 0x2000005fffc0 len 8388608 PASSED 00:04:18.532 free 0x2000005fffc0 8388608 00:04:18.532 unregister 0x200000400000 10485760 PASSED 00:04:18.532 passed 00:04:18.532 00:04:18.532 Run Summary: Type Total Ran Passed Failed Inactive 00:04:18.532 suites 1 1 n/a 0 0 00:04:18.532 tests 1 1 1 0 0 00:04:18.532 asserts 15 15 15 0 n/a 00:04:18.532 00:04:18.532 Elapsed time = 0.096 seconds 00:04:18.532 00:04:18.532 real 0m0.299s 00:04:18.532 user 0m0.123s 00:04:18.532 sys 0m0.074s 00:04:18.532 17:49:00 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.532 17:49:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:18.532 ************************************ 00:04:18.532 END TEST env_mem_callbacks 00:04:18.532 ************************************ 00:04:18.532 00:04:18.532 real 0m10.530s 00:04:18.532 user 0m8.714s 00:04:18.532 sys 0m1.452s 00:04:18.532 17:49:00 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.532 17:49:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.532 ************************************ 00:04:18.532 END TEST env 00:04:18.532 ************************************ 00:04:18.792 17:49:00 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:18.792 17:49:00 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.792 17:49:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.792 17:49:00 -- common/autotest_common.sh@10 -- # set +x 00:04:18.792 ************************************ 00:04:18.792 START TEST rpc 00:04:18.792 ************************************ 00:04:18.792 17:49:00 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:18.792 * Looking for test storage... 00:04:18.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:18.792 17:49:00 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:18.792 17:49:00 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:18.792 17:49:00 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:19.052 17:49:00 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:19.052 17:49:00 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.052 17:49:00 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.052 17:49:00 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.052 17:49:00 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.052 17:49:00 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.052 17:49:00 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.052 17:49:00 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.052 17:49:00 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.052 17:49:00 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.052 17:49:00 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.052 17:49:00 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.052 17:49:00 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:19.052 17:49:00 rpc -- scripts/common.sh@345 -- # : 1 00:04:19.052 17:49:00 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.052 17:49:00 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.052 17:49:00 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:19.052 17:49:00 rpc -- scripts/common.sh@353 -- # local d=1 00:04:19.052 17:49:00 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.052 17:49:00 rpc -- scripts/common.sh@355 -- # echo 1 00:04:19.052 17:49:00 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.052 17:49:00 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:19.052 17:49:00 rpc -- scripts/common.sh@353 -- # local d=2 00:04:19.052 17:49:00 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.052 17:49:00 rpc -- scripts/common.sh@355 -- # echo 2 00:04:19.052 17:49:00 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.052 17:49:00 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.052 17:49:00 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.052 17:49:00 rpc -- scripts/common.sh@368 -- # return 0 00:04:19.052 17:49:00 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.052 17:49:00 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:19.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.052 --rc genhtml_branch_coverage=1 00:04:19.052 --rc genhtml_function_coverage=1 00:04:19.052 --rc genhtml_legend=1 00:04:19.052 --rc geninfo_all_blocks=1 00:04:19.052 --rc geninfo_unexecuted_blocks=1 00:04:19.052 00:04:19.052 ' 00:04:19.052 17:49:00 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:19.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.052 --rc genhtml_branch_coverage=1 00:04:19.052 --rc genhtml_function_coverage=1 00:04:19.052 --rc genhtml_legend=1 00:04:19.052 --rc geninfo_all_blocks=1 00:04:19.052 --rc geninfo_unexecuted_blocks=1 00:04:19.052 00:04:19.052 ' 00:04:19.052 17:49:00 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:19.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:19.052 --rc genhtml_branch_coverage=1 00:04:19.052 --rc genhtml_function_coverage=1 00:04:19.052 --rc genhtml_legend=1 00:04:19.052 --rc geninfo_all_blocks=1 00:04:19.052 --rc geninfo_unexecuted_blocks=1 00:04:19.052 00:04:19.052 ' 00:04:19.052 17:49:00 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:19.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.052 --rc genhtml_branch_coverage=1 00:04:19.052 --rc genhtml_function_coverage=1 00:04:19.052 --rc genhtml_legend=1 00:04:19.052 --rc geninfo_all_blocks=1 00:04:19.052 --rc geninfo_unexecuted_blocks=1 00:04:19.052 00:04:19.052 ' 00:04:19.052 17:49:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56947 00:04:19.052 17:49:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.052 17:49:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56947 00:04:19.052 17:49:00 rpc -- common/autotest_common.sh@835 -- # '[' -z 56947 ']' 00:04:19.052 17:49:00 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:19.052 17:49:00 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.052 17:49:00 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:19.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.052 17:49:00 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.052 17:49:00 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:19.052 17:49:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.052 [2024-11-26 17:49:00.809746] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:04:19.052 [2024-11-26 17:49:00.809881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56947 ] 00:04:19.312 [2024-11-26 17:49:00.976119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.312 [2024-11-26 17:49:01.114918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:19.312 [2024-11-26 17:49:01.114987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56947' to capture a snapshot of events at runtime. 00:04:19.312 [2024-11-26 17:49:01.114999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:19.312 [2024-11-26 17:49:01.115010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:19.312 [2024-11-26 17:49:01.115029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56947 for offline analysis/debug. 
00:04:19.312 [2024-11-26 17:49:01.116535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.249 17:49:02 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.249 17:49:02 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:20.249 17:49:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:20.249 17:49:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:20.249 17:49:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:20.249 17:49:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:20.249 17:49:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.249 17:49:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.249 17:49:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.249 ************************************ 00:04:20.249 START TEST rpc_integrity 00:04:20.249 ************************************ 00:04:20.249 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:20.508 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:20.508 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.508 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.508 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.508 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:20.508 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:20.508 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:20.508 17:49:02 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:20.508 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.508 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.508 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.508 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:20.508 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:20.508 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.508 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.508 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.508 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:20.508 { 00:04:20.508 "name": "Malloc0", 00:04:20.508 "aliases": [ 00:04:20.508 "242cc120-7313-4547-975f-86d571d3d3e8" 00:04:20.508 ], 00:04:20.508 "product_name": "Malloc disk", 00:04:20.508 "block_size": 512, 00:04:20.508 "num_blocks": 16384, 00:04:20.508 "uuid": "242cc120-7313-4547-975f-86d571d3d3e8", 00:04:20.508 "assigned_rate_limits": { 00:04:20.508 "rw_ios_per_sec": 0, 00:04:20.508 "rw_mbytes_per_sec": 0, 00:04:20.508 "r_mbytes_per_sec": 0, 00:04:20.508 "w_mbytes_per_sec": 0 00:04:20.508 }, 00:04:20.508 "claimed": false, 00:04:20.508 "zoned": false, 00:04:20.508 "supported_io_types": { 00:04:20.508 "read": true, 00:04:20.508 "write": true, 00:04:20.508 "unmap": true, 00:04:20.508 "flush": true, 00:04:20.508 "reset": true, 00:04:20.508 "nvme_admin": false, 00:04:20.508 "nvme_io": false, 00:04:20.508 "nvme_io_md": false, 00:04:20.508 "write_zeroes": true, 00:04:20.509 "zcopy": true, 00:04:20.509 "get_zone_info": false, 00:04:20.509 "zone_management": false, 00:04:20.509 "zone_append": false, 00:04:20.509 "compare": false, 00:04:20.509 "compare_and_write": false, 00:04:20.509 "abort": true, 00:04:20.509 "seek_hole": false, 
00:04:20.509 "seek_data": false, 00:04:20.509 "copy": true, 00:04:20.509 "nvme_iov_md": false 00:04:20.509 }, 00:04:20.509 "memory_domains": [ 00:04:20.509 { 00:04:20.509 "dma_device_id": "system", 00:04:20.509 "dma_device_type": 1 00:04:20.509 }, 00:04:20.509 { 00:04:20.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.509 "dma_device_type": 2 00:04:20.509 } 00:04:20.509 ], 00:04:20.509 "driver_specific": {} 00:04:20.509 } 00:04:20.509 ]' 00:04:20.509 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:20.509 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:20.509 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:20.509 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.509 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.509 [2024-11-26 17:49:02.268565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:20.509 [2024-11-26 17:49:02.268639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:20.509 [2024-11-26 17:49:02.268666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:20.509 [2024-11-26 17:49:02.268681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:20.509 [2024-11-26 17:49:02.271141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:20.509 [2024-11-26 17:49:02.271185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:20.509 Passthru0 00:04:20.509 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.509 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:20.509 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.509 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:20.509 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.509 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:20.509 { 00:04:20.509 "name": "Malloc0", 00:04:20.509 "aliases": [ 00:04:20.509 "242cc120-7313-4547-975f-86d571d3d3e8" 00:04:20.509 ], 00:04:20.509 "product_name": "Malloc disk", 00:04:20.509 "block_size": 512, 00:04:20.509 "num_blocks": 16384, 00:04:20.509 "uuid": "242cc120-7313-4547-975f-86d571d3d3e8", 00:04:20.509 "assigned_rate_limits": { 00:04:20.509 "rw_ios_per_sec": 0, 00:04:20.509 "rw_mbytes_per_sec": 0, 00:04:20.509 "r_mbytes_per_sec": 0, 00:04:20.509 "w_mbytes_per_sec": 0 00:04:20.509 }, 00:04:20.509 "claimed": true, 00:04:20.509 "claim_type": "exclusive_write", 00:04:20.509 "zoned": false, 00:04:20.509 "supported_io_types": { 00:04:20.509 "read": true, 00:04:20.509 "write": true, 00:04:20.509 "unmap": true, 00:04:20.509 "flush": true, 00:04:20.509 "reset": true, 00:04:20.509 "nvme_admin": false, 00:04:20.509 "nvme_io": false, 00:04:20.509 "nvme_io_md": false, 00:04:20.509 "write_zeroes": true, 00:04:20.509 "zcopy": true, 00:04:20.509 "get_zone_info": false, 00:04:20.509 "zone_management": false, 00:04:20.509 "zone_append": false, 00:04:20.509 "compare": false, 00:04:20.509 "compare_and_write": false, 00:04:20.509 "abort": true, 00:04:20.509 "seek_hole": false, 00:04:20.509 "seek_data": false, 00:04:20.509 "copy": true, 00:04:20.509 "nvme_iov_md": false 00:04:20.509 }, 00:04:20.509 "memory_domains": [ 00:04:20.509 { 00:04:20.509 "dma_device_id": "system", 00:04:20.509 "dma_device_type": 1 00:04:20.509 }, 00:04:20.509 { 00:04:20.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.509 "dma_device_type": 2 00:04:20.509 } 00:04:20.509 ], 00:04:20.509 "driver_specific": {} 00:04:20.509 }, 00:04:20.509 { 00:04:20.509 "name": "Passthru0", 00:04:20.509 "aliases": [ 00:04:20.509 "beb8235d-7b64-5f2e-8a85-e59b7da7f327" 00:04:20.509 ], 00:04:20.509 "product_name": "passthru", 00:04:20.509 
"block_size": 512, 00:04:20.509 "num_blocks": 16384, 00:04:20.509 "uuid": "beb8235d-7b64-5f2e-8a85-e59b7da7f327", 00:04:20.509 "assigned_rate_limits": { 00:04:20.509 "rw_ios_per_sec": 0, 00:04:20.509 "rw_mbytes_per_sec": 0, 00:04:20.509 "r_mbytes_per_sec": 0, 00:04:20.509 "w_mbytes_per_sec": 0 00:04:20.509 }, 00:04:20.509 "claimed": false, 00:04:20.509 "zoned": false, 00:04:20.509 "supported_io_types": { 00:04:20.509 "read": true, 00:04:20.509 "write": true, 00:04:20.509 "unmap": true, 00:04:20.509 "flush": true, 00:04:20.509 "reset": true, 00:04:20.509 "nvme_admin": false, 00:04:20.509 "nvme_io": false, 00:04:20.509 "nvme_io_md": false, 00:04:20.509 "write_zeroes": true, 00:04:20.509 "zcopy": true, 00:04:20.509 "get_zone_info": false, 00:04:20.509 "zone_management": false, 00:04:20.509 "zone_append": false, 00:04:20.509 "compare": false, 00:04:20.509 "compare_and_write": false, 00:04:20.509 "abort": true, 00:04:20.509 "seek_hole": false, 00:04:20.509 "seek_data": false, 00:04:20.509 "copy": true, 00:04:20.509 "nvme_iov_md": false 00:04:20.509 }, 00:04:20.509 "memory_domains": [ 00:04:20.509 { 00:04:20.509 "dma_device_id": "system", 00:04:20.509 "dma_device_type": 1 00:04:20.509 }, 00:04:20.509 { 00:04:20.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.509 "dma_device_type": 2 00:04:20.509 } 00:04:20.509 ], 00:04:20.509 "driver_specific": { 00:04:20.509 "passthru": { 00:04:20.509 "name": "Passthru0", 00:04:20.509 "base_bdev_name": "Malloc0" 00:04:20.509 } 00:04:20.509 } 00:04:20.509 } 00:04:20.509 ]' 00:04:20.509 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:20.509 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:20.509 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:20.509 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.509 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.509 17:49:02 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.509 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:20.509 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.509 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.769 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.769 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:20.769 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.769 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.769 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.769 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:20.769 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:20.769 17:49:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:20.769 00:04:20.769 real 0m0.349s 00:04:20.769 user 0m0.199s 00:04:20.769 sys 0m0.051s 00:04:20.769 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.769 17:49:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.769 ************************************ 00:04:20.769 END TEST rpc_integrity 00:04:20.769 ************************************ 00:04:20.769 17:49:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:20.769 17:49:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.769 17:49:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.769 17:49:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.769 ************************************ 00:04:20.769 START TEST rpc_plugins 00:04:20.769 ************************************ 00:04:20.769 17:49:02 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:20.769 17:49:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:20.769 17:49:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.769 17:49:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:20.769 17:49:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.769 17:49:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:20.769 17:49:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:20.769 17:49:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.769 17:49:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:20.769 17:49:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.769 17:49:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:20.769 { 00:04:20.769 "name": "Malloc1", 00:04:20.769 "aliases": [ 00:04:20.769 "2ab0dd25-d738-49a2-8f5a-3ee2d3ba5451" 00:04:20.769 ], 00:04:20.769 "product_name": "Malloc disk", 00:04:20.769 "block_size": 4096, 00:04:20.769 "num_blocks": 256, 00:04:20.769 "uuid": "2ab0dd25-d738-49a2-8f5a-3ee2d3ba5451", 00:04:20.769 "assigned_rate_limits": { 00:04:20.769 "rw_ios_per_sec": 0, 00:04:20.769 "rw_mbytes_per_sec": 0, 00:04:20.769 "r_mbytes_per_sec": 0, 00:04:20.769 "w_mbytes_per_sec": 0 00:04:20.769 }, 00:04:20.769 "claimed": false, 00:04:20.769 "zoned": false, 00:04:20.769 "supported_io_types": { 00:04:20.769 "read": true, 00:04:20.769 "write": true, 00:04:20.769 "unmap": true, 00:04:20.769 "flush": true, 00:04:20.769 "reset": true, 00:04:20.769 "nvme_admin": false, 00:04:20.769 "nvme_io": false, 00:04:20.769 "nvme_io_md": false, 00:04:20.769 "write_zeroes": true, 00:04:20.769 "zcopy": true, 00:04:20.769 "get_zone_info": false, 00:04:20.769 "zone_management": false, 00:04:20.769 "zone_append": false, 00:04:20.769 "compare": false, 00:04:20.769 "compare_and_write": false, 00:04:20.769 "abort": true, 00:04:20.769 "seek_hole": false, 00:04:20.769 "seek_data": false, 00:04:20.769 "copy": 
true, 00:04:20.769 "nvme_iov_md": false 00:04:20.769 }, 00:04:20.769 "memory_domains": [ 00:04:20.769 { 00:04:20.769 "dma_device_id": "system", 00:04:20.769 "dma_device_type": 1 00:04:20.769 }, 00:04:20.769 { 00:04:20.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.769 "dma_device_type": 2 00:04:20.769 } 00:04:20.769 ], 00:04:20.769 "driver_specific": {} 00:04:20.769 } 00:04:20.769 ]' 00:04:20.769 17:49:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:20.769 17:49:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:20.769 17:49:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:20.769 17:49:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.769 17:49:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:20.769 17:49:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.769 17:49:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:20.769 17:49:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.769 17:49:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.029 17:49:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.029 17:49:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:21.029 17:49:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:21.029 17:49:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:21.029 00:04:21.029 real 0m0.168s 00:04:21.029 user 0m0.101s 00:04:21.029 sys 0m0.029s 00:04:21.029 17:49:02 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.029 17:49:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.029 ************************************ 00:04:21.029 END TEST rpc_plugins 00:04:21.029 ************************************ 00:04:21.029 17:49:02 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:21.029 17:49:02 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.029 17:49:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.029 17:49:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.029 ************************************ 00:04:21.029 START TEST rpc_trace_cmd_test 00:04:21.029 ************************************ 00:04:21.029 17:49:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:21.029 17:49:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:21.029 17:49:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:21.029 17:49:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.029 17:49:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:21.029 17:49:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.029 17:49:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:21.029 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56947", 00:04:21.029 "tpoint_group_mask": "0x8", 00:04:21.029 "iscsi_conn": { 00:04:21.029 "mask": "0x2", 00:04:21.029 "tpoint_mask": "0x0" 00:04:21.029 }, 00:04:21.029 "scsi": { 00:04:21.029 "mask": "0x4", 00:04:21.029 "tpoint_mask": "0x0" 00:04:21.029 }, 00:04:21.029 "bdev": { 00:04:21.029 "mask": "0x8", 00:04:21.029 "tpoint_mask": "0xffffffffffffffff" 00:04:21.029 }, 00:04:21.029 "nvmf_rdma": { 00:04:21.029 "mask": "0x10", 00:04:21.029 "tpoint_mask": "0x0" 00:04:21.029 }, 00:04:21.029 "nvmf_tcp": { 00:04:21.029 "mask": "0x20", 00:04:21.029 "tpoint_mask": "0x0" 00:04:21.029 }, 00:04:21.029 "ftl": { 00:04:21.029 "mask": "0x40", 00:04:21.029 "tpoint_mask": "0x0" 00:04:21.029 }, 00:04:21.029 "blobfs": { 00:04:21.029 "mask": "0x80", 00:04:21.029 "tpoint_mask": "0x0" 00:04:21.029 }, 00:04:21.029 "dsa": { 00:04:21.029 "mask": "0x200", 00:04:21.029 "tpoint_mask": "0x0" 00:04:21.029 }, 00:04:21.029 "thread": { 00:04:21.029 "mask": "0x400", 00:04:21.029 
"tpoint_mask": "0x0" 00:04:21.029 }, 00:04:21.029 "nvme_pcie": { 00:04:21.029 "mask": "0x800", 00:04:21.029 "tpoint_mask": "0x0" 00:04:21.029 }, 00:04:21.029 "iaa": { 00:04:21.029 "mask": "0x1000", 00:04:21.029 "tpoint_mask": "0x0" 00:04:21.029 }, 00:04:21.029 "nvme_tcp": { 00:04:21.029 "mask": "0x2000", 00:04:21.029 "tpoint_mask": "0x0" 00:04:21.029 }, 00:04:21.029 "bdev_nvme": { 00:04:21.029 "mask": "0x4000", 00:04:21.029 "tpoint_mask": "0x0" 00:04:21.029 }, 00:04:21.029 "sock": { 00:04:21.029 "mask": "0x8000", 00:04:21.029 "tpoint_mask": "0x0" 00:04:21.029 }, 00:04:21.029 "blob": { 00:04:21.029 "mask": "0x10000", 00:04:21.029 "tpoint_mask": "0x0" 00:04:21.029 }, 00:04:21.029 "bdev_raid": { 00:04:21.029 "mask": "0x20000", 00:04:21.029 "tpoint_mask": "0x0" 00:04:21.029 }, 00:04:21.029 "scheduler": { 00:04:21.029 "mask": "0x40000", 00:04:21.029 "tpoint_mask": "0x0" 00:04:21.029 } 00:04:21.029 }' 00:04:21.029 17:49:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:21.029 17:49:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:21.029 17:49:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:21.029 17:49:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:21.029 17:49:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:21.289 17:49:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:21.289 17:49:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:21.289 17:49:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:21.289 17:49:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:21.289 17:49:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:21.289 00:04:21.289 real 0m0.257s 00:04:21.289 user 0m0.203s 00:04:21.289 sys 0m0.045s 00:04:21.289 17:49:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:21.289 17:49:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:21.289 ************************************ 00:04:21.289 END TEST rpc_trace_cmd_test 00:04:21.289 ************************************ 00:04:21.289 17:49:03 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:21.289 17:49:03 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:21.289 17:49:03 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:21.289 17:49:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.289 17:49:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.289 17:49:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.289 ************************************ 00:04:21.289 START TEST rpc_daemon_integrity 00:04:21.289 ************************************ 00:04:21.289 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:21.289 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:21.289 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.289 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.289 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.289 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:21.289 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:21.289 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:21.289 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:21.289 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.289 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:21.550 { 00:04:21.550 "name": "Malloc2", 00:04:21.550 "aliases": [ 00:04:21.550 "e4e5dc86-dac0-4f42-907b-79e78e6dcc25" 00:04:21.550 ], 00:04:21.550 "product_name": "Malloc disk", 00:04:21.550 "block_size": 512, 00:04:21.550 "num_blocks": 16384, 00:04:21.550 "uuid": "e4e5dc86-dac0-4f42-907b-79e78e6dcc25", 00:04:21.550 "assigned_rate_limits": { 00:04:21.550 "rw_ios_per_sec": 0, 00:04:21.550 "rw_mbytes_per_sec": 0, 00:04:21.550 "r_mbytes_per_sec": 0, 00:04:21.550 "w_mbytes_per_sec": 0 00:04:21.550 }, 00:04:21.550 "claimed": false, 00:04:21.550 "zoned": false, 00:04:21.550 "supported_io_types": { 00:04:21.550 "read": true, 00:04:21.550 "write": true, 00:04:21.550 "unmap": true, 00:04:21.550 "flush": true, 00:04:21.550 "reset": true, 00:04:21.550 "nvme_admin": false, 00:04:21.550 "nvme_io": false, 00:04:21.550 "nvme_io_md": false, 00:04:21.550 "write_zeroes": true, 00:04:21.550 "zcopy": true, 00:04:21.550 "get_zone_info": false, 00:04:21.550 "zone_management": false, 00:04:21.550 "zone_append": false, 00:04:21.550 "compare": false, 00:04:21.550 "compare_and_write": false, 00:04:21.550 "abort": true, 00:04:21.550 "seek_hole": false, 00:04:21.550 "seek_data": false, 00:04:21.550 "copy": true, 00:04:21.550 "nvme_iov_md": false 00:04:21.550 }, 00:04:21.550 "memory_domains": [ 00:04:21.550 { 00:04:21.550 "dma_device_id": "system", 00:04:21.550 "dma_device_type": 1 00:04:21.550 }, 00:04:21.550 { 00:04:21.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.550 "dma_device_type": 2 00:04:21.550 } 
00:04:21.550 ], 00:04:21.550 "driver_specific": {} 00:04:21.550 } 00:04:21.550 ]' 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.550 [2024-11-26 17:49:03.234796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:21.550 [2024-11-26 17:49:03.234889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:21.550 [2024-11-26 17:49:03.234915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:21.550 [2024-11-26 17:49:03.234931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:21.550 [2024-11-26 17:49:03.237478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:21.550 [2024-11-26 17:49:03.237523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:21.550 Passthru0 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:21.550 { 00:04:21.550 "name": "Malloc2", 00:04:21.550 "aliases": [ 00:04:21.550 "e4e5dc86-dac0-4f42-907b-79e78e6dcc25" 
00:04:21.550 ], 00:04:21.550 "product_name": "Malloc disk", 00:04:21.550 "block_size": 512, 00:04:21.550 "num_blocks": 16384, 00:04:21.550 "uuid": "e4e5dc86-dac0-4f42-907b-79e78e6dcc25", 00:04:21.550 "assigned_rate_limits": { 00:04:21.550 "rw_ios_per_sec": 0, 00:04:21.550 "rw_mbytes_per_sec": 0, 00:04:21.550 "r_mbytes_per_sec": 0, 00:04:21.550 "w_mbytes_per_sec": 0 00:04:21.550 }, 00:04:21.550 "claimed": true, 00:04:21.550 "claim_type": "exclusive_write", 00:04:21.550 "zoned": false, 00:04:21.550 "supported_io_types": { 00:04:21.550 "read": true, 00:04:21.550 "write": true, 00:04:21.550 "unmap": true, 00:04:21.550 "flush": true, 00:04:21.550 "reset": true, 00:04:21.550 "nvme_admin": false, 00:04:21.550 "nvme_io": false, 00:04:21.550 "nvme_io_md": false, 00:04:21.550 "write_zeroes": true, 00:04:21.550 "zcopy": true, 00:04:21.550 "get_zone_info": false, 00:04:21.550 "zone_management": false, 00:04:21.550 "zone_append": false, 00:04:21.550 "compare": false, 00:04:21.550 "compare_and_write": false, 00:04:21.550 "abort": true, 00:04:21.550 "seek_hole": false, 00:04:21.550 "seek_data": false, 00:04:21.550 "copy": true, 00:04:21.550 "nvme_iov_md": false 00:04:21.550 }, 00:04:21.550 "memory_domains": [ 00:04:21.550 { 00:04:21.550 "dma_device_id": "system", 00:04:21.550 "dma_device_type": 1 00:04:21.550 }, 00:04:21.550 { 00:04:21.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.550 "dma_device_type": 2 00:04:21.550 } 00:04:21.550 ], 00:04:21.550 "driver_specific": {} 00:04:21.550 }, 00:04:21.550 { 00:04:21.550 "name": "Passthru0", 00:04:21.550 "aliases": [ 00:04:21.550 "e30d6d28-f9e4-5102-9a67-393b5be5462a" 00:04:21.550 ], 00:04:21.550 "product_name": "passthru", 00:04:21.550 "block_size": 512, 00:04:21.550 "num_blocks": 16384, 00:04:21.550 "uuid": "e30d6d28-f9e4-5102-9a67-393b5be5462a", 00:04:21.550 "assigned_rate_limits": { 00:04:21.550 "rw_ios_per_sec": 0, 00:04:21.550 "rw_mbytes_per_sec": 0, 00:04:21.550 "r_mbytes_per_sec": 0, 00:04:21.550 "w_mbytes_per_sec": 0 
00:04:21.550 }, 00:04:21.550 "claimed": false, 00:04:21.550 "zoned": false, 00:04:21.550 "supported_io_types": { 00:04:21.550 "read": true, 00:04:21.550 "write": true, 00:04:21.550 "unmap": true, 00:04:21.550 "flush": true, 00:04:21.550 "reset": true, 00:04:21.550 "nvme_admin": false, 00:04:21.550 "nvme_io": false, 00:04:21.550 "nvme_io_md": false, 00:04:21.550 "write_zeroes": true, 00:04:21.550 "zcopy": true, 00:04:21.550 "get_zone_info": false, 00:04:21.550 "zone_management": false, 00:04:21.550 "zone_append": false, 00:04:21.550 "compare": false, 00:04:21.550 "compare_and_write": false, 00:04:21.550 "abort": true, 00:04:21.550 "seek_hole": false, 00:04:21.550 "seek_data": false, 00:04:21.550 "copy": true, 00:04:21.550 "nvme_iov_md": false 00:04:21.550 }, 00:04:21.550 "memory_domains": [ 00:04:21.550 { 00:04:21.550 "dma_device_id": "system", 00:04:21.550 "dma_device_type": 1 00:04:21.550 }, 00:04:21.550 { 00:04:21.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.550 "dma_device_type": 2 00:04:21.550 } 00:04:21.550 ], 00:04:21.550 "driver_specific": { 00:04:21.550 "passthru": { 00:04:21.550 "name": "Passthru0", 00:04:21.550 "base_bdev_name": "Malloc2" 00:04:21.550 } 00:04:21.550 } 00:04:21.550 } 00:04:21.550 ]' 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:21.550 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:21.810 17:49:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:21.810 00:04:21.810 real 0m0.361s 00:04:21.810 user 0m0.197s 00:04:21.810 sys 0m0.060s 00:04:21.810 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.810 17:49:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.810 ************************************ 00:04:21.810 END TEST rpc_daemon_integrity 00:04:21.810 ************************************ 00:04:21.810 17:49:03 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:21.810 17:49:03 rpc -- rpc/rpc.sh@84 -- # killprocess 56947 00:04:21.810 17:49:03 rpc -- common/autotest_common.sh@954 -- # '[' -z 56947 ']' 00:04:21.810 17:49:03 rpc -- common/autotest_common.sh@958 -- # kill -0 56947 00:04:21.810 17:49:03 rpc -- common/autotest_common.sh@959 -- # uname 00:04:21.810 17:49:03 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.810 17:49:03 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56947 00:04:21.810 17:49:03 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.810 17:49:03 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.810 
killing process with pid 56947 00:04:21.810 17:49:03 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56947' 00:04:21.810 17:49:03 rpc -- common/autotest_common.sh@973 -- # kill 56947 00:04:21.810 17:49:03 rpc -- common/autotest_common.sh@978 -- # wait 56947 00:04:24.347 00:04:24.347 real 0m5.569s 00:04:24.347 user 0m6.216s 00:04:24.347 sys 0m0.991s 00:04:24.347 17:49:06 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.347 17:49:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.347 ************************************ 00:04:24.347 END TEST rpc 00:04:24.347 ************************************ 00:04:24.347 17:49:06 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:24.347 17:49:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.347 17:49:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.347 17:49:06 -- common/autotest_common.sh@10 -- # set +x 00:04:24.347 ************************************ 00:04:24.347 START TEST skip_rpc 00:04:24.347 ************************************ 00:04:24.347 17:49:06 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:24.347 * Looking for test storage... 
00:04:24.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:24.607 17:49:06 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:24.607 17:49:06 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:24.607 17:49:06 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:24.607 17:49:06 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.607 17:49:06 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:24.607 17:49:06 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.607 17:49:06 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:24.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.607 --rc genhtml_branch_coverage=1 00:04:24.607 --rc genhtml_function_coverage=1 00:04:24.607 --rc genhtml_legend=1 00:04:24.607 --rc geninfo_all_blocks=1 00:04:24.607 --rc geninfo_unexecuted_blocks=1 00:04:24.607 00:04:24.607 ' 00:04:24.607 17:49:06 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:24.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.607 --rc genhtml_branch_coverage=1 00:04:24.607 --rc genhtml_function_coverage=1 00:04:24.607 --rc genhtml_legend=1 00:04:24.607 --rc geninfo_all_blocks=1 00:04:24.607 --rc geninfo_unexecuted_blocks=1 00:04:24.607 00:04:24.607 ' 00:04:24.607 17:49:06 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:24.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.607 --rc genhtml_branch_coverage=1 00:04:24.607 --rc genhtml_function_coverage=1 00:04:24.607 --rc genhtml_legend=1 00:04:24.607 --rc geninfo_all_blocks=1 00:04:24.607 --rc geninfo_unexecuted_blocks=1 00:04:24.607 00:04:24.607 ' 00:04:24.608 17:49:06 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:24.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.608 --rc genhtml_branch_coverage=1 00:04:24.608 --rc genhtml_function_coverage=1 00:04:24.608 --rc genhtml_legend=1 00:04:24.608 --rc geninfo_all_blocks=1 00:04:24.608 --rc geninfo_unexecuted_blocks=1 00:04:24.608 00:04:24.608 ' 00:04:24.608 17:49:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:24.608 17:49:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:24.608 17:49:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:24.608 17:49:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.608 17:49:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.608 17:49:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.608 ************************************ 00:04:24.608 START TEST skip_rpc 00:04:24.608 ************************************ 00:04:24.608 17:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:24.608 17:49:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57187 00:04:24.608 17:49:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:24.608 17:49:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.608 17:49:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:24.608 [2024-11-26 17:49:06.452924] Starting SPDK v25.01-pre 
git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:04:24.608 [2024-11-26 17:49:06.453081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57187 ] 00:04:24.896 [2024-11-26 17:49:06.629309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.896 [2024-11-26 17:49:06.751875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.168 17:49:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:30.168 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:30.168 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:30.168 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:30.168 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:30.168 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:30.168 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57187 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57187 ']' 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57187 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57187 00:04:30.169 killing process with pid 57187 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57187' 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57187 00:04:30.169 17:49:11 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57187 00:04:32.702 ************************************ 00:04:32.702 END TEST skip_rpc 00:04:32.702 ************************************ 00:04:32.702 00:04:32.702 real 0m7.910s 00:04:32.702 user 0m7.428s 00:04:32.702 sys 0m0.382s 00:04:32.702 17:49:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.702 17:49:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.702 17:49:14 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:32.702 17:49:14 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.702 17:49:14 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.703 17:49:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.703 
************************************ 00:04:32.703 START TEST skip_rpc_with_json 00:04:32.703 ************************************ 00:04:32.703 17:49:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:32.703 17:49:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:32.703 17:49:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57291 00:04:32.703 17:49:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:32.703 17:49:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.703 17:49:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57291 00:04:32.703 17:49:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57291 ']' 00:04:32.703 17:49:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.703 17:49:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.703 17:49:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.703 17:49:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.703 17:49:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.703 [2024-11-26 17:49:14.459509] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:04:32.703 [2024-11-26 17:49:14.459747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57291 ] 00:04:32.962 [2024-11-26 17:49:14.653406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.962 [2024-11-26 17:49:14.792922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.346 17:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.346 17:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:34.346 17:49:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:34.346 17:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.346 17:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.346 [2024-11-26 17:49:15.855039] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:34.346 request: 00:04:34.346 { 00:04:34.346 "trtype": "tcp", 00:04:34.346 "method": "nvmf_get_transports", 00:04:34.346 "req_id": 1 00:04:34.346 } 00:04:34.346 Got JSON-RPC error response 00:04:34.346 response: 00:04:34.346 { 00:04:34.346 "code": -19, 00:04:34.346 "message": "No such device" 00:04:34.346 } 00:04:34.346 17:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:34.346 17:49:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:34.346 17:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.346 17:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.346 [2024-11-26 17:49:15.863211] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:34.346 17:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.346 17:49:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:34.346 17:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.346 17:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.346 17:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.346 17:49:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:34.346 { 00:04:34.346 "subsystems": [ 00:04:34.346 { 00:04:34.346 "subsystem": "fsdev", 00:04:34.346 "config": [ 00:04:34.346 { 00:04:34.346 "method": "fsdev_set_opts", 00:04:34.346 "params": { 00:04:34.346 "fsdev_io_pool_size": 65535, 00:04:34.346 "fsdev_io_cache_size": 256 00:04:34.346 } 00:04:34.346 } 00:04:34.346 ] 00:04:34.346 }, 00:04:34.346 { 00:04:34.346 "subsystem": "keyring", 00:04:34.346 "config": [] 00:04:34.346 }, 00:04:34.346 { 00:04:34.346 "subsystem": "iobuf", 00:04:34.346 "config": [ 00:04:34.346 { 00:04:34.346 "method": "iobuf_set_options", 00:04:34.346 "params": { 00:04:34.346 "small_pool_count": 8192, 00:04:34.346 "large_pool_count": 1024, 00:04:34.346 "small_bufsize": 8192, 00:04:34.346 "large_bufsize": 135168, 00:04:34.346 "enable_numa": false 00:04:34.346 } 00:04:34.346 } 00:04:34.346 ] 00:04:34.346 }, 00:04:34.346 { 00:04:34.346 "subsystem": "sock", 00:04:34.346 "config": [ 00:04:34.346 { 00:04:34.346 "method": "sock_set_default_impl", 00:04:34.346 "params": { 00:04:34.346 "impl_name": "posix" 00:04:34.346 } 00:04:34.346 }, 00:04:34.346 { 00:04:34.346 "method": "sock_impl_set_options", 00:04:34.346 "params": { 00:04:34.346 "impl_name": "ssl", 00:04:34.346 "recv_buf_size": 4096, 00:04:34.346 "send_buf_size": 4096, 00:04:34.346 "enable_recv_pipe": true, 00:04:34.346 "enable_quickack": false, 00:04:34.346 
"enable_placement_id": 0, 00:04:34.346 "enable_zerocopy_send_server": true, 00:04:34.346 "enable_zerocopy_send_client": false, 00:04:34.346 "zerocopy_threshold": 0, 00:04:34.346 "tls_version": 0, 00:04:34.346 "enable_ktls": false 00:04:34.346 } 00:04:34.346 }, 00:04:34.346 { 00:04:34.346 "method": "sock_impl_set_options", 00:04:34.346 "params": { 00:04:34.346 "impl_name": "posix", 00:04:34.346 "recv_buf_size": 2097152, 00:04:34.346 "send_buf_size": 2097152, 00:04:34.346 "enable_recv_pipe": true, 00:04:34.346 "enable_quickack": false, 00:04:34.346 "enable_placement_id": 0, 00:04:34.346 "enable_zerocopy_send_server": true, 00:04:34.346 "enable_zerocopy_send_client": false, 00:04:34.346 "zerocopy_threshold": 0, 00:04:34.346 "tls_version": 0, 00:04:34.346 "enable_ktls": false 00:04:34.346 } 00:04:34.346 } 00:04:34.346 ] 00:04:34.346 }, 00:04:34.346 { 00:04:34.346 "subsystem": "vmd", 00:04:34.346 "config": [] 00:04:34.346 }, 00:04:34.346 { 00:04:34.346 "subsystem": "accel", 00:04:34.346 "config": [ 00:04:34.346 { 00:04:34.346 "method": "accel_set_options", 00:04:34.346 "params": { 00:04:34.346 "small_cache_size": 128, 00:04:34.346 "large_cache_size": 16, 00:04:34.346 "task_count": 2048, 00:04:34.346 "sequence_count": 2048, 00:04:34.346 "buf_count": 2048 00:04:34.346 } 00:04:34.346 } 00:04:34.346 ] 00:04:34.346 }, 00:04:34.346 { 00:04:34.346 "subsystem": "bdev", 00:04:34.346 "config": [ 00:04:34.346 { 00:04:34.346 "method": "bdev_set_options", 00:04:34.346 "params": { 00:04:34.346 "bdev_io_pool_size": 65535, 00:04:34.346 "bdev_io_cache_size": 256, 00:04:34.346 "bdev_auto_examine": true, 00:04:34.346 "iobuf_small_cache_size": 128, 00:04:34.346 "iobuf_large_cache_size": 16 00:04:34.346 } 00:04:34.346 }, 00:04:34.346 { 00:04:34.346 "method": "bdev_raid_set_options", 00:04:34.346 "params": { 00:04:34.346 "process_window_size_kb": 1024, 00:04:34.346 "process_max_bandwidth_mb_sec": 0 00:04:34.346 } 00:04:34.346 }, 00:04:34.346 { 00:04:34.346 "method": "bdev_iscsi_set_options", 
00:04:34.346 "params": { 00:04:34.346 "timeout_sec": 30 00:04:34.346 } 00:04:34.346 }, 00:04:34.346 { 00:04:34.346 "method": "bdev_nvme_set_options", 00:04:34.346 "params": { 00:04:34.346 "action_on_timeout": "none", 00:04:34.346 "timeout_us": 0, 00:04:34.346 "timeout_admin_us": 0, 00:04:34.346 "keep_alive_timeout_ms": 10000, 00:04:34.346 "arbitration_burst": 0, 00:04:34.346 "low_priority_weight": 0, 00:04:34.346 "medium_priority_weight": 0, 00:04:34.346 "high_priority_weight": 0, 00:04:34.346 "nvme_adminq_poll_period_us": 10000, 00:04:34.346 "nvme_ioq_poll_period_us": 0, 00:04:34.346 "io_queue_requests": 0, 00:04:34.346 "delay_cmd_submit": true, 00:04:34.346 "transport_retry_count": 4, 00:04:34.346 "bdev_retry_count": 3, 00:04:34.346 "transport_ack_timeout": 0, 00:04:34.346 "ctrlr_loss_timeout_sec": 0, 00:04:34.346 "reconnect_delay_sec": 0, 00:04:34.346 "fast_io_fail_timeout_sec": 0, 00:04:34.346 "disable_auto_failback": false, 00:04:34.346 "generate_uuids": false, 00:04:34.346 "transport_tos": 0, 00:04:34.346 "nvme_error_stat": false, 00:04:34.346 "rdma_srq_size": 0, 00:04:34.346 "io_path_stat": false, 00:04:34.346 "allow_accel_sequence": false, 00:04:34.346 "rdma_max_cq_size": 0, 00:04:34.346 "rdma_cm_event_timeout_ms": 0, 00:04:34.346 "dhchap_digests": [ 00:04:34.346 "sha256", 00:04:34.346 "sha384", 00:04:34.346 "sha512" 00:04:34.346 ], 00:04:34.346 "dhchap_dhgroups": [ 00:04:34.346 "null", 00:04:34.346 "ffdhe2048", 00:04:34.346 "ffdhe3072", 00:04:34.346 "ffdhe4096", 00:04:34.346 "ffdhe6144", 00:04:34.346 "ffdhe8192" 00:04:34.346 ] 00:04:34.346 } 00:04:34.346 }, 00:04:34.346 { 00:04:34.346 "method": "bdev_nvme_set_hotplug", 00:04:34.346 "params": { 00:04:34.346 "period_us": 100000, 00:04:34.346 "enable": false 00:04:34.346 } 00:04:34.346 }, 00:04:34.346 { 00:04:34.346 "method": "bdev_wait_for_examine" 00:04:34.346 } 00:04:34.346 ] 00:04:34.346 }, 00:04:34.346 { 00:04:34.346 "subsystem": "scsi", 00:04:34.346 "config": null 00:04:34.346 }, 00:04:34.346 { 
00:04:34.346 "subsystem": "scheduler", 00:04:34.346 "config": [ 00:04:34.346 { 00:04:34.346 "method": "framework_set_scheduler", 00:04:34.346 "params": { 00:04:34.346 "name": "static" 00:04:34.346 } 00:04:34.346 } 00:04:34.346 ] 00:04:34.346 }, 00:04:34.346 { 00:04:34.346 "subsystem": "vhost_scsi", 00:04:34.346 "config": [] 00:04:34.346 }, 00:04:34.346 { 00:04:34.346 "subsystem": "vhost_blk", 00:04:34.346 "config": [] 00:04:34.346 }, 00:04:34.346 { 00:04:34.346 "subsystem": "ublk", 00:04:34.346 "config": [] 00:04:34.346 }, 00:04:34.346 { 00:04:34.346 "subsystem": "nbd", 00:04:34.346 "config": [] 00:04:34.346 }, 00:04:34.347 { 00:04:34.347 "subsystem": "nvmf", 00:04:34.347 "config": [ 00:04:34.347 { 00:04:34.347 "method": "nvmf_set_config", 00:04:34.347 "params": { 00:04:34.347 "discovery_filter": "match_any", 00:04:34.347 "admin_cmd_passthru": { 00:04:34.347 "identify_ctrlr": false 00:04:34.347 }, 00:04:34.347 "dhchap_digests": [ 00:04:34.347 "sha256", 00:04:34.347 "sha384", 00:04:34.347 "sha512" 00:04:34.347 ], 00:04:34.347 "dhchap_dhgroups": [ 00:04:34.347 "null", 00:04:34.347 "ffdhe2048", 00:04:34.347 "ffdhe3072", 00:04:34.347 "ffdhe4096", 00:04:34.347 "ffdhe6144", 00:04:34.347 "ffdhe8192" 00:04:34.347 ] 00:04:34.347 } 00:04:34.347 }, 00:04:34.347 { 00:04:34.347 "method": "nvmf_set_max_subsystems", 00:04:34.347 "params": { 00:04:34.347 "max_subsystems": 1024 00:04:34.347 } 00:04:34.347 }, 00:04:34.347 { 00:04:34.347 "method": "nvmf_set_crdt", 00:04:34.347 "params": { 00:04:34.347 "crdt1": 0, 00:04:34.347 "crdt2": 0, 00:04:34.347 "crdt3": 0 00:04:34.347 } 00:04:34.347 }, 00:04:34.347 { 00:04:34.347 "method": "nvmf_create_transport", 00:04:34.347 "params": { 00:04:34.347 "trtype": "TCP", 00:04:34.347 "max_queue_depth": 128, 00:04:34.347 "max_io_qpairs_per_ctrlr": 127, 00:04:34.347 "in_capsule_data_size": 4096, 00:04:34.347 "max_io_size": 131072, 00:04:34.347 "io_unit_size": 131072, 00:04:34.347 "max_aq_depth": 128, 00:04:34.347 "num_shared_buffers": 511, 
00:04:34.347 "buf_cache_size": 4294967295, 00:04:34.347 "dif_insert_or_strip": false, 00:04:34.347 "zcopy": false, 00:04:34.347 "c2h_success": true, 00:04:34.347 "sock_priority": 0, 00:04:34.347 "abort_timeout_sec": 1, 00:04:34.347 "ack_timeout": 0, 00:04:34.347 "data_wr_pool_size": 0 00:04:34.347 } 00:04:34.347 } 00:04:34.347 ] 00:04:34.347 }, 00:04:34.347 { 00:04:34.347 "subsystem": "iscsi", 00:04:34.347 "config": [ 00:04:34.347 { 00:04:34.347 "method": "iscsi_set_options", 00:04:34.347 "params": { 00:04:34.347 "node_base": "iqn.2016-06.io.spdk", 00:04:34.347 "max_sessions": 128, 00:04:34.347 "max_connections_per_session": 2, 00:04:34.347 "max_queue_depth": 64, 00:04:34.347 "default_time2wait": 2, 00:04:34.347 "default_time2retain": 20, 00:04:34.347 "first_burst_length": 8192, 00:04:34.347 "immediate_data": true, 00:04:34.347 "allow_duplicated_isid": false, 00:04:34.347 "error_recovery_level": 0, 00:04:34.347 "nop_timeout": 60, 00:04:34.347 "nop_in_interval": 30, 00:04:34.347 "disable_chap": false, 00:04:34.347 "require_chap": false, 00:04:34.347 "mutual_chap": false, 00:04:34.347 "chap_group": 0, 00:04:34.347 "max_large_datain_per_connection": 64, 00:04:34.347 "max_r2t_per_connection": 4, 00:04:34.347 "pdu_pool_size": 36864, 00:04:34.347 "immediate_data_pool_size": 16384, 00:04:34.347 "data_out_pool_size": 2048 00:04:34.347 } 00:04:34.347 } 00:04:34.347 ] 00:04:34.347 } 00:04:34.347 ] 00:04:34.347 } 00:04:34.347 17:49:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:34.347 17:49:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57291 00:04:34.347 17:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57291 ']' 00:04:34.347 17:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57291 00:04:34.347 17:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:34.347 17:49:16 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.347 17:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57291 00:04:34.347 17:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.347 killing process with pid 57291 00:04:34.347 17:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.347 17:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57291' 00:04:34.347 17:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57291 00:04:34.347 17:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57291 00:04:37.634 17:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57358 00:04:37.634 17:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:37.634 17:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:42.928 17:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57358 00:04:42.928 17:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57358 ']' 00:04:42.928 17:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57358 00:04:42.928 17:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:42.928 17:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.928 17:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57358 00:04:42.928 17:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.928 17:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:04:42.928 killing process with pid 57358 00:04:42.928 17:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57358' 00:04:42.928 17:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57358 00:04:42.928 17:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57358 00:04:45.462 17:49:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:45.462 17:49:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:45.462 00:04:45.462 real 0m12.669s 00:04:45.462 user 0m12.097s 00:04:45.462 sys 0m0.961s 00:04:45.462 17:49:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.462 17:49:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.462 ************************************ 00:04:45.462 END TEST skip_rpc_with_json 00:04:45.462 ************************************ 00:04:45.462 17:49:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:45.462 17:49:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.462 17:49:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.462 17:49:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.462 ************************************ 00:04:45.462 START TEST skip_rpc_with_delay 00:04:45.462 ************************************ 00:04:45.462 17:49:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:45.462 17:49:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:45.462 17:49:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:45.462 17:49:27 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:45.462 17:49:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.462 17:49:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.462 17:49:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.462 17:49:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.462 17:49:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.462 17:49:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.462 17:49:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.462 17:49:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:45.462 17:49:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:45.462 [2024-11-26 17:49:27.173154] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:45.462 17:49:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:45.462 17:49:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:45.462 17:49:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:45.462 17:49:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:45.462 00:04:45.462 real 0m0.210s 00:04:45.462 user 0m0.102s 00:04:45.462 sys 0m0.105s 00:04:45.462 17:49:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.463 17:49:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:45.463 ************************************ 00:04:45.463 END TEST skip_rpc_with_delay 00:04:45.463 ************************************ 00:04:45.463 17:49:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:45.463 17:49:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:45.463 17:49:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:45.463 17:49:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.463 17:49:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.463 17:49:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.463 ************************************ 00:04:45.463 START TEST exit_on_failed_rpc_init 00:04:45.463 ************************************ 00:04:45.463 17:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:45.463 17:49:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57498 00:04:45.463 17:49:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:45.463 17:49:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57498 00:04:45.463 17:49:27 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57498 ']' 00:04:45.463 17:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.463 17:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.463 17:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.463 17:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.463 17:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:45.723 [2024-11-26 17:49:27.439043] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:04:45.723 [2024-11-26 17:49:27.439225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57498 ] 00:04:45.983 [2024-11-26 17:49:27.625995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.983 [2024-11-26 17:49:27.753331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.920 17:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.920 17:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:46.920 17:49:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.920 17:49:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:46.920 17:49:28 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:46.920 17:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:46.920 17:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.920 17:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.920 17:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.920 17:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.920 17:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.920 17:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.920 17:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.920 17:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:46.920 17:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:47.178 [2024-11-26 17:49:28.851710] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:04:47.178 [2024-11-26 17:49:28.851897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57516 ] 00:04:47.473 [2024-11-26 17:49:29.039966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.473 [2024-11-26 17:49:29.201733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.473 [2024-11-26 17:49:29.201908] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:47.473 [2024-11-26 17:49:29.201930] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:47.473 [2024-11-26 17:49:29.201948] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:47.735 17:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:47.735 17:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:47.735 17:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:47.735 17:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:47.735 17:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:47.735 17:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:47.735 17:49:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:47.735 17:49:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57498 00:04:47.735 17:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57498 ']' 00:04:47.735 17:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57498 00:04:47.735 17:49:29 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:47.735 17:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.735 17:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57498 00:04:47.735 17:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.735 17:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.735 killing process with pid 57498 00:04:47.735 17:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57498' 00:04:47.735 17:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57498 00:04:47.735 17:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57498 00:04:51.022 00:04:51.022 real 0m5.140s 00:04:51.022 user 0m5.653s 00:04:51.022 sys 0m0.700s 00:04:51.022 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.022 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.022 ************************************ 00:04:51.022 END TEST exit_on_failed_rpc_init 00:04:51.022 ************************************ 00:04:51.022 17:49:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:51.022 00:04:51.022 real 0m26.415s 00:04:51.022 user 0m25.495s 00:04:51.022 sys 0m2.439s 00:04:51.022 17:49:32 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.023 17:49:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.023 ************************************ 00:04:51.023 END TEST skip_rpc 00:04:51.023 ************************************ 00:04:51.023 17:49:32 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:51.023 17:49:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.023 17:49:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.023 17:49:32 -- common/autotest_common.sh@10 -- # set +x 00:04:51.023 ************************************ 00:04:51.023 START TEST rpc_client 00:04:51.023 ************************************ 00:04:51.023 17:49:32 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:51.023 * Looking for test storage... 00:04:51.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:51.023 17:49:32 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:51.023 17:49:32 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:51.023 17:49:32 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:51.023 17:49:32 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.023 17:49:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:51.023 17:49:32 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.023 17:49:32 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:51.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.023 --rc genhtml_branch_coverage=1 00:04:51.023 --rc genhtml_function_coverage=1 00:04:51.023 --rc genhtml_legend=1 00:04:51.023 --rc geninfo_all_blocks=1 00:04:51.023 --rc geninfo_unexecuted_blocks=1 00:04:51.023 00:04:51.023 ' 00:04:51.023 17:49:32 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:51.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.023 --rc genhtml_branch_coverage=1 00:04:51.023 --rc genhtml_function_coverage=1 00:04:51.023 --rc 
genhtml_legend=1 00:04:51.023 --rc geninfo_all_blocks=1 00:04:51.023 --rc geninfo_unexecuted_blocks=1 00:04:51.023 00:04:51.023 ' 00:04:51.023 17:49:32 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:51.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.023 --rc genhtml_branch_coverage=1 00:04:51.023 --rc genhtml_function_coverage=1 00:04:51.023 --rc genhtml_legend=1 00:04:51.023 --rc geninfo_all_blocks=1 00:04:51.023 --rc geninfo_unexecuted_blocks=1 00:04:51.023 00:04:51.023 ' 00:04:51.023 17:49:32 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:51.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.023 --rc genhtml_branch_coverage=1 00:04:51.023 --rc genhtml_function_coverage=1 00:04:51.023 --rc genhtml_legend=1 00:04:51.023 --rc geninfo_all_blocks=1 00:04:51.023 --rc geninfo_unexecuted_blocks=1 00:04:51.023 00:04:51.023 ' 00:04:51.023 17:49:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:51.023 OK 00:04:51.023 17:49:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:51.023 00:04:51.023 real 0m0.315s 00:04:51.023 user 0m0.174s 00:04:51.023 sys 0m0.155s 00:04:51.023 17:49:32 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.023 17:49:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:51.023 ************************************ 00:04:51.023 END TEST rpc_client 00:04:51.023 ************************************ 00:04:51.282 17:49:32 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:51.282 17:49:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.282 17:49:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.282 17:49:32 -- common/autotest_common.sh@10 -- # set +x 00:04:51.282 ************************************ 00:04:51.282 START TEST json_config 
00:04:51.282 ************************************ 00:04:51.282 17:49:32 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:51.282 17:49:32 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:51.282 17:49:32 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:51.282 17:49:32 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:51.282 17:49:33 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:51.282 17:49:33 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.282 17:49:33 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.282 17:49:33 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.282 17:49:33 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.282 17:49:33 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.282 17:49:33 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.282 17:49:33 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.282 17:49:33 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.282 17:49:33 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.282 17:49:33 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.282 17:49:33 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.282 17:49:33 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:51.282 17:49:33 json_config -- scripts/common.sh@345 -- # : 1 00:04:51.282 17:49:33 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.282 17:49:33 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.282 17:49:33 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:51.282 17:49:33 json_config -- scripts/common.sh@353 -- # local d=1 00:04:51.282 17:49:33 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.282 17:49:33 json_config -- scripts/common.sh@355 -- # echo 1 00:04:51.282 17:49:33 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.282 17:49:33 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:51.282 17:49:33 json_config -- scripts/common.sh@353 -- # local d=2 00:04:51.282 17:49:33 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.282 17:49:33 json_config -- scripts/common.sh@355 -- # echo 2 00:04:51.282 17:49:33 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.282 17:49:33 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.282 17:49:33 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.282 17:49:33 json_config -- scripts/common.sh@368 -- # return 0 00:04:51.282 17:49:33 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.282 17:49:33 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:51.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.282 --rc genhtml_branch_coverage=1 00:04:51.282 --rc genhtml_function_coverage=1 00:04:51.282 --rc genhtml_legend=1 00:04:51.282 --rc geninfo_all_blocks=1 00:04:51.282 --rc geninfo_unexecuted_blocks=1 00:04:51.282 00:04:51.282 ' 00:04:51.282 17:49:33 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:51.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.282 --rc genhtml_branch_coverage=1 00:04:51.282 --rc genhtml_function_coverage=1 00:04:51.282 --rc genhtml_legend=1 00:04:51.282 --rc geninfo_all_blocks=1 00:04:51.282 --rc geninfo_unexecuted_blocks=1 00:04:51.282 00:04:51.282 ' 00:04:51.282 17:49:33 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:51.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.282 --rc genhtml_branch_coverage=1 00:04:51.282 --rc genhtml_function_coverage=1 00:04:51.282 --rc genhtml_legend=1 00:04:51.282 --rc geninfo_all_blocks=1 00:04:51.282 --rc geninfo_unexecuted_blocks=1 00:04:51.282 00:04:51.282 ' 00:04:51.282 17:49:33 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:51.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.283 --rc genhtml_branch_coverage=1 00:04:51.283 --rc genhtml_function_coverage=1 00:04:51.283 --rc genhtml_legend=1 00:04:51.283 --rc geninfo_all_blocks=1 00:04:51.283 --rc geninfo_unexecuted_blocks=1 00:04:51.283 00:04:51.283 ' 00:04:51.283 17:49:33 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b590854-7bd7-4381-93fd-b908217718d3 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=6b590854-7bd7-4381-93fd-b908217718d3 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:51.283 17:49:33 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:51.283 17:49:33 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.283 17:49:33 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.283 17:49:33 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.283 17:49:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.283 17:49:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.283 17:49:33 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.283 17:49:33 json_config -- paths/export.sh@5 -- # export PATH 00:04:51.283 17:49:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@51 -- # : 0 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:51.283 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:51.283 17:49:33 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.283 17:49:33 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:51.283 17:49:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:51.283 17:49:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:51.283 17:49:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:51.283 17:49:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:51.283 17:49:33 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:51.283 WARNING: No tests are enabled so not running JSON configuration tests 00:04:51.283 17:49:33 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:51.283 00:04:51.283 real 0m0.217s 00:04:51.283 user 0m0.127s 00:04:51.283 sys 0m0.092s 00:04:51.283 17:49:33 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.283 17:49:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.283 ************************************ 00:04:51.283 END TEST json_config 00:04:51.283 ************************************ 00:04:51.543 17:49:33 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:51.543 17:49:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.543 17:49:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.543 17:49:33 -- common/autotest_common.sh@10 -- # set +x 00:04:51.543 ************************************ 00:04:51.543 START TEST json_config_extra_key 00:04:51.543 ************************************ 00:04:51.543 17:49:33 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:51.543 17:49:33 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:51.543 17:49:33 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:51.543 17:49:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:51.543 17:49:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.543 17:49:33 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:51.543 17:49:33 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.543 17:49:33 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:51.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.543 --rc genhtml_branch_coverage=1 00:04:51.543 --rc genhtml_function_coverage=1 00:04:51.543 --rc genhtml_legend=1 00:04:51.543 --rc geninfo_all_blocks=1 00:04:51.543 --rc geninfo_unexecuted_blocks=1 00:04:51.543 00:04:51.543 ' 00:04:51.543 17:49:33 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:51.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.543 --rc genhtml_branch_coverage=1 00:04:51.543 --rc genhtml_function_coverage=1 00:04:51.543 --rc 
genhtml_legend=1 00:04:51.543 --rc geninfo_all_blocks=1 00:04:51.543 --rc geninfo_unexecuted_blocks=1 00:04:51.543 00:04:51.543 ' 00:04:51.543 17:49:33 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:51.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.543 --rc genhtml_branch_coverage=1 00:04:51.543 --rc genhtml_function_coverage=1 00:04:51.543 --rc genhtml_legend=1 00:04:51.543 --rc geninfo_all_blocks=1 00:04:51.543 --rc geninfo_unexecuted_blocks=1 00:04:51.543 00:04:51.543 ' 00:04:51.544 17:49:33 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:51.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.544 --rc genhtml_branch_coverage=1 00:04:51.544 --rc genhtml_function_coverage=1 00:04:51.544 --rc genhtml_legend=1 00:04:51.544 --rc geninfo_all_blocks=1 00:04:51.544 --rc geninfo_unexecuted_blocks=1 00:04:51.544 00:04:51.544 ' 00:04:51.544 17:49:33 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:51.544 17:49:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:51.544 17:49:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.544 17:49:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.544 17:49:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.544 17:49:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.544 17:49:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.544 17:49:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.544 17:49:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.544 17:49:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.544 17:49:33 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.544 17:49:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.544 17:49:33 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b590854-7bd7-4381-93fd-b908217718d3 00:04:51.544 17:49:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=6b590854-7bd7-4381-93fd-b908217718d3 00:04:51.544 17:49:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.544 17:49:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.544 17:49:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.544 17:49:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.544 17:49:33 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:51.544 17:49:33 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:51.804 17:49:33 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.804 17:49:33 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.804 17:49:33 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.804 17:49:33 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.804 17:49:33 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.804 17:49:33 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.804 17:49:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:51.804 17:49:33 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.804 17:49:33 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:51.804 17:49:33 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:51.804 17:49:33 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:51.804 17:49:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.804 17:49:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.804 17:49:33 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:51.804 17:49:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:51.804 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:51.804 17:49:33 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:51.804 17:49:33 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:51.804 17:49:33 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.804 17:49:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:51.804 17:49:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:51.804 17:49:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:51.804 17:49:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:51.804 17:49:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:51.804 17:49:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:51.804 17:49:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:51.804 17:49:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:51.804 17:49:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:51.804 17:49:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.804 INFO: launching applications... 00:04:51.804 17:49:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:51.804 17:49:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:51.804 17:49:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:51.804 17:49:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:51.804 17:49:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.804 17:49:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.804 17:49:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.804 17:49:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.804 17:49:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.804 17:49:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57737 00:04:51.804 Waiting for target to run... 00:04:51.804 17:49:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.804 17:49:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57737 /var/tmp/spdk_tgt.sock 00:04:51.804 17:49:33 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57737 ']' 00:04:51.804 17:49:33 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.804 17:49:33 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.804 17:49:33 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:51.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:51.804 17:49:33 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.804 17:49:33 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.804 17:49:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.804 [2024-11-26 17:49:33.554873] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:04:51.804 [2024-11-26 17:49:33.555085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57737 ] 00:04:52.401 [2024-11-26 17:49:34.153753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.677 [2024-11-26 17:49:34.281103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.614 17:49:35 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.614 17:49:35 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:53.614 00:04:53.614 17:49:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:53.614 INFO: shutting down applications... 00:04:53.614 17:49:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:53.614 17:49:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:53.614 17:49:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:53.614 17:49:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:53.614 17:49:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57737 ]] 00:04:53.614 17:49:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57737 00:04:53.614 17:49:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:53.614 17:49:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.614 17:49:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57737 00:04:53.614 17:49:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.874 17:49:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.874 17:49:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.874 17:49:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57737 00:04:53.874 17:49:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.443 17:49:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:54.443 17:49:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.443 17:49:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57737 00:04:54.443 17:49:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.011 17:49:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.012 17:49:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.012 17:49:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57737 00:04:55.012 17:49:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.580 17:49:37 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:55.580 17:49:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.580 17:49:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57737 00:04:55.580 17:49:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.840 17:49:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.840 17:49:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.840 17:49:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57737 00:04:55.840 17:49:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.408 17:49:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.408 17:49:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.408 17:49:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57737 00:04:56.408 17:49:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.976 17:49:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.976 17:49:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.976 17:49:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57737 00:04:56.976 17:49:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:56.976 17:49:38 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:56.976 17:49:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:56.976 SPDK target shutdown done 00:04:56.976 17:49:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:56.976 Success 00:04:56.976 17:49:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:56.976 00:04:56.976 real 0m5.468s 00:04:56.976 user 0m4.839s 00:04:56.976 sys 0m0.825s 00:04:56.976 17:49:38 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:56.976 17:49:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:56.976 ************************************ 00:04:56.976 END TEST json_config_extra_key 00:04:56.976 ************************************ 00:04:56.976 17:49:38 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:56.976 17:49:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.976 17:49:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.976 17:49:38 -- common/autotest_common.sh@10 -- # set +x 00:04:56.976 ************************************ 00:04:56.976 START TEST alias_rpc 00:04:56.976 ************************************ 00:04:56.977 17:49:38 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:57.235 * Looking for test storage... 00:04:57.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:57.235 17:49:38 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.235 17:49:38 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.235 17:49:38 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.235 17:49:38 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.235 17:49:38 alias_rpc -- 
scripts/common.sh@340 -- # ver1_l=2 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.235 17:49:38 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:57.235 17:49:38 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.235 17:49:38 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.235 --rc genhtml_branch_coverage=1 00:04:57.235 --rc genhtml_function_coverage=1 00:04:57.235 --rc genhtml_legend=1 00:04:57.235 --rc geninfo_all_blocks=1 00:04:57.235 --rc 
geninfo_unexecuted_blocks=1 00:04:57.235 00:04:57.235 ' 00:04:57.235 17:49:38 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.235 --rc genhtml_branch_coverage=1 00:04:57.235 --rc genhtml_function_coverage=1 00:04:57.235 --rc genhtml_legend=1 00:04:57.235 --rc geninfo_all_blocks=1 00:04:57.235 --rc geninfo_unexecuted_blocks=1 00:04:57.235 00:04:57.235 ' 00:04:57.235 17:49:38 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.235 --rc genhtml_branch_coverage=1 00:04:57.235 --rc genhtml_function_coverage=1 00:04:57.235 --rc genhtml_legend=1 00:04:57.235 --rc geninfo_all_blocks=1 00:04:57.235 --rc geninfo_unexecuted_blocks=1 00:04:57.235 00:04:57.235 ' 00:04:57.235 17:49:38 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.235 --rc genhtml_branch_coverage=1 00:04:57.235 --rc genhtml_function_coverage=1 00:04:57.235 --rc genhtml_legend=1 00:04:57.235 --rc geninfo_all_blocks=1 00:04:57.235 --rc geninfo_unexecuted_blocks=1 00:04:57.235 00:04:57.235 ' 00:04:57.235 17:49:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:57.235 17:49:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57855 00:04:57.235 17:49:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.235 17:49:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57855 00:04:57.235 17:49:38 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57855 ']' 00:04:57.235 17:49:38 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.235 17:49:38 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.235 17:49:38 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.235 17:49:38 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.235 17:49:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.494 [2024-11-26 17:49:39.109881] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:04:57.494 [2024-11-26 17:49:39.110094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57855 ] 00:04:57.494 [2024-11-26 17:49:39.281705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.750 [2024-11-26 17:49:39.450607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.127 17:49:40 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.127 17:49:40 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:59.127 17:49:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:59.127 17:49:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57855 00:04:59.127 17:49:40 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57855 ']' 00:04:59.127 17:49:40 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57855 00:04:59.128 17:49:40 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:59.128 17:49:40 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.128 17:49:40 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57855 00:04:59.412 killing process with pid 57855 00:04:59.412 17:49:40 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.412 17:49:40 alias_rpc -- common/autotest_common.sh@964 -- # 
'[' reactor_0 = sudo ']' 00:04:59.412 17:49:41 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57855' 00:04:59.412 17:49:41 alias_rpc -- common/autotest_common.sh@973 -- # kill 57855 00:04:59.412 17:49:41 alias_rpc -- common/autotest_common.sh@978 -- # wait 57855 00:05:02.702 00:05:02.702 real 0m5.336s 00:05:02.702 user 0m5.273s 00:05:02.702 sys 0m0.847s 00:05:02.702 17:49:44 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.702 17:49:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.702 ************************************ 00:05:02.702 END TEST alias_rpc 00:05:02.702 ************************************ 00:05:02.702 17:49:44 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:02.702 17:49:44 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:02.702 17:49:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.702 17:49:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.702 17:49:44 -- common/autotest_common.sh@10 -- # set +x 00:05:02.702 ************************************ 00:05:02.702 START TEST spdkcli_tcp 00:05:02.702 ************************************ 00:05:02.702 17:49:44 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:02.702 * Looking for test storage... 
00:05:02.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:02.702 17:49:44 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:02.702 17:49:44 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:02.702 17:49:44 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:02.702 17:49:44 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:02.702 17:49:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:02.703 17:49:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.703 17:49:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:02.703 17:49:44 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.703 17:49:44 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.703 17:49:44 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.703 17:49:44 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:02.703 17:49:44 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.703 17:49:44 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:02.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.703 --rc genhtml_branch_coverage=1 00:05:02.703 --rc genhtml_function_coverage=1 00:05:02.703 --rc genhtml_legend=1 00:05:02.703 --rc geninfo_all_blocks=1 00:05:02.703 --rc geninfo_unexecuted_blocks=1 00:05:02.703 00:05:02.703 ' 00:05:02.703 17:49:44 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:02.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.703 --rc genhtml_branch_coverage=1 00:05:02.703 --rc genhtml_function_coverage=1 00:05:02.703 --rc genhtml_legend=1 00:05:02.703 --rc geninfo_all_blocks=1 00:05:02.703 --rc geninfo_unexecuted_blocks=1 00:05:02.703 00:05:02.703 ' 00:05:02.703 17:49:44 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:02.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.703 --rc genhtml_branch_coverage=1 00:05:02.703 --rc genhtml_function_coverage=1 00:05:02.703 --rc genhtml_legend=1 00:05:02.703 --rc geninfo_all_blocks=1 00:05:02.703 --rc geninfo_unexecuted_blocks=1 00:05:02.703 00:05:02.703 ' 00:05:02.703 17:49:44 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:02.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.703 --rc genhtml_branch_coverage=1 00:05:02.703 --rc genhtml_function_coverage=1 00:05:02.703 --rc genhtml_legend=1 00:05:02.703 --rc geninfo_all_blocks=1 00:05:02.703 --rc geninfo_unexecuted_blocks=1 00:05:02.703 00:05:02.703 ' 00:05:02.703 17:49:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:02.703 17:49:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:02.703 17:49:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:02.703 17:49:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:02.703 17:49:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:02.703 17:49:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:02.703 17:49:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:02.703 17:49:44 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.703 17:49:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.703 17:49:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57979 00:05:02.703 17:49:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:02.703 17:49:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57979 00:05:02.703 17:49:44 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57979 ']' 00:05:02.703 17:49:44 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.703 17:49:44 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.703 17:49:44 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.703 17:49:44 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.703 17:49:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.703 [2024-11-26 17:49:44.472172] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:05:02.703 [2024-11-26 17:49:44.472316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57979 ] 00:05:02.961 [2024-11-26 17:49:44.652554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.961 [2024-11-26 17:49:44.793256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.961 [2024-11-26 17:49:44.793310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.341 17:49:45 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.341 17:49:45 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:04.341 17:49:45 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57996 00:05:04.341 17:49:45 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:04.341 17:49:45 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:04.341 [ 00:05:04.341 "bdev_malloc_delete", 
00:05:04.341 "bdev_malloc_create", 00:05:04.341 "bdev_null_resize", 00:05:04.341 "bdev_null_delete", 00:05:04.341 "bdev_null_create", 00:05:04.341 "bdev_nvme_cuse_unregister", 00:05:04.341 "bdev_nvme_cuse_register", 00:05:04.341 "bdev_opal_new_user", 00:05:04.341 "bdev_opal_set_lock_state", 00:05:04.341 "bdev_opal_delete", 00:05:04.341 "bdev_opal_get_info", 00:05:04.341 "bdev_opal_create", 00:05:04.341 "bdev_nvme_opal_revert", 00:05:04.341 "bdev_nvme_opal_init", 00:05:04.341 "bdev_nvme_send_cmd", 00:05:04.341 "bdev_nvme_set_keys", 00:05:04.341 "bdev_nvme_get_path_iostat", 00:05:04.341 "bdev_nvme_get_mdns_discovery_info", 00:05:04.341 "bdev_nvme_stop_mdns_discovery", 00:05:04.341 "bdev_nvme_start_mdns_discovery", 00:05:04.341 "bdev_nvme_set_multipath_policy", 00:05:04.341 "bdev_nvme_set_preferred_path", 00:05:04.341 "bdev_nvme_get_io_paths", 00:05:04.341 "bdev_nvme_remove_error_injection", 00:05:04.341 "bdev_nvme_add_error_injection", 00:05:04.341 "bdev_nvme_get_discovery_info", 00:05:04.341 "bdev_nvme_stop_discovery", 00:05:04.341 "bdev_nvme_start_discovery", 00:05:04.341 "bdev_nvme_get_controller_health_info", 00:05:04.341 "bdev_nvme_disable_controller", 00:05:04.341 "bdev_nvme_enable_controller", 00:05:04.341 "bdev_nvme_reset_controller", 00:05:04.341 "bdev_nvme_get_transport_statistics", 00:05:04.341 "bdev_nvme_apply_firmware", 00:05:04.341 "bdev_nvme_detach_controller", 00:05:04.341 "bdev_nvme_get_controllers", 00:05:04.341 "bdev_nvme_attach_controller", 00:05:04.341 "bdev_nvme_set_hotplug", 00:05:04.341 "bdev_nvme_set_options", 00:05:04.341 "bdev_passthru_delete", 00:05:04.341 "bdev_passthru_create", 00:05:04.341 "bdev_lvol_set_parent_bdev", 00:05:04.341 "bdev_lvol_set_parent", 00:05:04.341 "bdev_lvol_check_shallow_copy", 00:05:04.342 "bdev_lvol_start_shallow_copy", 00:05:04.342 "bdev_lvol_grow_lvstore", 00:05:04.342 "bdev_lvol_get_lvols", 00:05:04.342 "bdev_lvol_get_lvstores", 00:05:04.342 "bdev_lvol_delete", 00:05:04.342 "bdev_lvol_set_read_only", 
00:05:04.342 "bdev_lvol_resize", 00:05:04.342 "bdev_lvol_decouple_parent", 00:05:04.342 "bdev_lvol_inflate", 00:05:04.342 "bdev_lvol_rename", 00:05:04.342 "bdev_lvol_clone_bdev", 00:05:04.342 "bdev_lvol_clone", 00:05:04.342 "bdev_lvol_snapshot", 00:05:04.342 "bdev_lvol_create", 00:05:04.342 "bdev_lvol_delete_lvstore", 00:05:04.342 "bdev_lvol_rename_lvstore", 00:05:04.342 "bdev_lvol_create_lvstore", 00:05:04.342 "bdev_raid_set_options", 00:05:04.342 "bdev_raid_remove_base_bdev", 00:05:04.342 "bdev_raid_add_base_bdev", 00:05:04.342 "bdev_raid_delete", 00:05:04.342 "bdev_raid_create", 00:05:04.342 "bdev_raid_get_bdevs", 00:05:04.342 "bdev_error_inject_error", 00:05:04.342 "bdev_error_delete", 00:05:04.342 "bdev_error_create", 00:05:04.342 "bdev_split_delete", 00:05:04.342 "bdev_split_create", 00:05:04.342 "bdev_delay_delete", 00:05:04.342 "bdev_delay_create", 00:05:04.342 "bdev_delay_update_latency", 00:05:04.342 "bdev_zone_block_delete", 00:05:04.342 "bdev_zone_block_create", 00:05:04.342 "blobfs_create", 00:05:04.342 "blobfs_detect", 00:05:04.342 "blobfs_set_cache_size", 00:05:04.342 "bdev_aio_delete", 00:05:04.342 "bdev_aio_rescan", 00:05:04.342 "bdev_aio_create", 00:05:04.342 "bdev_ftl_set_property", 00:05:04.342 "bdev_ftl_get_properties", 00:05:04.342 "bdev_ftl_get_stats", 00:05:04.342 "bdev_ftl_unmap", 00:05:04.342 "bdev_ftl_unload", 00:05:04.342 "bdev_ftl_delete", 00:05:04.342 "bdev_ftl_load", 00:05:04.342 "bdev_ftl_create", 00:05:04.342 "bdev_virtio_attach_controller", 00:05:04.342 "bdev_virtio_scsi_get_devices", 00:05:04.342 "bdev_virtio_detach_controller", 00:05:04.342 "bdev_virtio_blk_set_hotplug", 00:05:04.342 "bdev_iscsi_delete", 00:05:04.342 "bdev_iscsi_create", 00:05:04.342 "bdev_iscsi_set_options", 00:05:04.342 "accel_error_inject_error", 00:05:04.342 "ioat_scan_accel_module", 00:05:04.342 "dsa_scan_accel_module", 00:05:04.342 "iaa_scan_accel_module", 00:05:04.342 "keyring_file_remove_key", 00:05:04.342 "keyring_file_add_key", 00:05:04.342 
"keyring_linux_set_options", 00:05:04.342 "fsdev_aio_delete", 00:05:04.342 "fsdev_aio_create", 00:05:04.342 "iscsi_get_histogram", 00:05:04.342 "iscsi_enable_histogram", 00:05:04.342 "iscsi_set_options", 00:05:04.342 "iscsi_get_auth_groups", 00:05:04.342 "iscsi_auth_group_remove_secret", 00:05:04.342 "iscsi_auth_group_add_secret", 00:05:04.342 "iscsi_delete_auth_group", 00:05:04.342 "iscsi_create_auth_group", 00:05:04.342 "iscsi_set_discovery_auth", 00:05:04.342 "iscsi_get_options", 00:05:04.342 "iscsi_target_node_request_logout", 00:05:04.342 "iscsi_target_node_set_redirect", 00:05:04.342 "iscsi_target_node_set_auth", 00:05:04.342 "iscsi_target_node_add_lun", 00:05:04.342 "iscsi_get_stats", 00:05:04.342 "iscsi_get_connections", 00:05:04.342 "iscsi_portal_group_set_auth", 00:05:04.342 "iscsi_start_portal_group", 00:05:04.342 "iscsi_delete_portal_group", 00:05:04.342 "iscsi_create_portal_group", 00:05:04.342 "iscsi_get_portal_groups", 00:05:04.342 "iscsi_delete_target_node", 00:05:04.342 "iscsi_target_node_remove_pg_ig_maps", 00:05:04.342 "iscsi_target_node_add_pg_ig_maps", 00:05:04.342 "iscsi_create_target_node", 00:05:04.342 "iscsi_get_target_nodes", 00:05:04.342 "iscsi_delete_initiator_group", 00:05:04.342 "iscsi_initiator_group_remove_initiators", 00:05:04.342 "iscsi_initiator_group_add_initiators", 00:05:04.342 "iscsi_create_initiator_group", 00:05:04.342 "iscsi_get_initiator_groups", 00:05:04.342 "nvmf_set_crdt", 00:05:04.342 "nvmf_set_config", 00:05:04.342 "nvmf_set_max_subsystems", 00:05:04.342 "nvmf_stop_mdns_prr", 00:05:04.342 "nvmf_publish_mdns_prr", 00:05:04.342 "nvmf_subsystem_get_listeners", 00:05:04.342 "nvmf_subsystem_get_qpairs", 00:05:04.342 "nvmf_subsystem_get_controllers", 00:05:04.342 "nvmf_get_stats", 00:05:04.342 "nvmf_get_transports", 00:05:04.342 "nvmf_create_transport", 00:05:04.342 "nvmf_get_targets", 00:05:04.342 "nvmf_delete_target", 00:05:04.342 "nvmf_create_target", 00:05:04.342 "nvmf_subsystem_allow_any_host", 00:05:04.342 
"nvmf_subsystem_set_keys", 00:05:04.342 "nvmf_subsystem_remove_host", 00:05:04.342 "nvmf_subsystem_add_host", 00:05:04.342 "nvmf_ns_remove_host", 00:05:04.342 "nvmf_ns_add_host", 00:05:04.342 "nvmf_subsystem_remove_ns", 00:05:04.342 "nvmf_subsystem_set_ns_ana_group", 00:05:04.342 "nvmf_subsystem_add_ns", 00:05:04.342 "nvmf_subsystem_listener_set_ana_state", 00:05:04.342 "nvmf_discovery_get_referrals", 00:05:04.342 "nvmf_discovery_remove_referral", 00:05:04.342 "nvmf_discovery_add_referral", 00:05:04.342 "nvmf_subsystem_remove_listener", 00:05:04.342 "nvmf_subsystem_add_listener", 00:05:04.342 "nvmf_delete_subsystem", 00:05:04.342 "nvmf_create_subsystem", 00:05:04.342 "nvmf_get_subsystems", 00:05:04.342 "env_dpdk_get_mem_stats", 00:05:04.342 "nbd_get_disks", 00:05:04.342 "nbd_stop_disk", 00:05:04.342 "nbd_start_disk", 00:05:04.342 "ublk_recover_disk", 00:05:04.342 "ublk_get_disks", 00:05:04.342 "ublk_stop_disk", 00:05:04.342 "ublk_start_disk", 00:05:04.342 "ublk_destroy_target", 00:05:04.342 "ublk_create_target", 00:05:04.342 "virtio_blk_create_transport", 00:05:04.342 "virtio_blk_get_transports", 00:05:04.342 "vhost_controller_set_coalescing", 00:05:04.342 "vhost_get_controllers", 00:05:04.342 "vhost_delete_controller", 00:05:04.342 "vhost_create_blk_controller", 00:05:04.342 "vhost_scsi_controller_remove_target", 00:05:04.342 "vhost_scsi_controller_add_target", 00:05:04.342 "vhost_start_scsi_controller", 00:05:04.342 "vhost_create_scsi_controller", 00:05:04.342 "thread_set_cpumask", 00:05:04.342 "scheduler_set_options", 00:05:04.342 "framework_get_governor", 00:05:04.342 "framework_get_scheduler", 00:05:04.342 "framework_set_scheduler", 00:05:04.342 "framework_get_reactors", 00:05:04.342 "thread_get_io_channels", 00:05:04.342 "thread_get_pollers", 00:05:04.342 "thread_get_stats", 00:05:04.342 "framework_monitor_context_switch", 00:05:04.342 "spdk_kill_instance", 00:05:04.342 "log_enable_timestamps", 00:05:04.342 "log_get_flags", 00:05:04.342 "log_clear_flag", 
00:05:04.342 "log_set_flag", 00:05:04.342 "log_get_level", 00:05:04.342 "log_set_level", 00:05:04.342 "log_get_print_level", 00:05:04.342 "log_set_print_level", 00:05:04.342 "framework_enable_cpumask_locks", 00:05:04.342 "framework_disable_cpumask_locks", 00:05:04.342 "framework_wait_init", 00:05:04.342 "framework_start_init", 00:05:04.342 "scsi_get_devices", 00:05:04.342 "bdev_get_histogram", 00:05:04.342 "bdev_enable_histogram", 00:05:04.342 "bdev_set_qos_limit", 00:05:04.342 "bdev_set_qd_sampling_period", 00:05:04.342 "bdev_get_bdevs", 00:05:04.342 "bdev_reset_iostat", 00:05:04.342 "bdev_get_iostat", 00:05:04.342 "bdev_examine", 00:05:04.342 "bdev_wait_for_examine", 00:05:04.342 "bdev_set_options", 00:05:04.342 "accel_get_stats", 00:05:04.342 "accel_set_options", 00:05:04.342 "accel_set_driver", 00:05:04.342 "accel_crypto_key_destroy", 00:05:04.342 "accel_crypto_keys_get", 00:05:04.342 "accel_crypto_key_create", 00:05:04.342 "accel_assign_opc", 00:05:04.342 "accel_get_module_info", 00:05:04.342 "accel_get_opc_assignments", 00:05:04.342 "vmd_rescan", 00:05:04.342 "vmd_remove_device", 00:05:04.342 "vmd_enable", 00:05:04.342 "sock_get_default_impl", 00:05:04.342 "sock_set_default_impl", 00:05:04.342 "sock_impl_set_options", 00:05:04.342 "sock_impl_get_options", 00:05:04.342 "iobuf_get_stats", 00:05:04.342 "iobuf_set_options", 00:05:04.342 "keyring_get_keys", 00:05:04.342 "framework_get_pci_devices", 00:05:04.342 "framework_get_config", 00:05:04.342 "framework_get_subsystems", 00:05:04.342 "fsdev_set_opts", 00:05:04.342 "fsdev_get_opts", 00:05:04.342 "trace_get_info", 00:05:04.342 "trace_get_tpoint_group_mask", 00:05:04.342 "trace_disable_tpoint_group", 00:05:04.342 "trace_enable_tpoint_group", 00:05:04.342 "trace_clear_tpoint_mask", 00:05:04.342 "trace_set_tpoint_mask", 00:05:04.342 "notify_get_notifications", 00:05:04.342 "notify_get_types", 00:05:04.342 "spdk_get_version", 00:05:04.342 "rpc_get_methods" 00:05:04.342 ] 00:05:04.342 17:49:46 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:04.342 17:49:46 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.342 17:49:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:04.602 17:49:46 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:04.602 17:49:46 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57979 00:05:04.602 17:49:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57979 ']' 00:05:04.602 17:49:46 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57979 00:05:04.602 17:49:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:04.602 17:49:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.602 17:49:46 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57979 00:05:04.602 killing process with pid 57979 00:05:04.602 17:49:46 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.602 17:49:46 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.602 17:49:46 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57979' 00:05:04.602 17:49:46 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57979 00:05:04.602 17:49:46 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57979 00:05:07.894 00:05:07.894 real 0m5.017s 00:05:07.894 user 0m8.821s 00:05:07.894 sys 0m0.895s 00:05:07.894 17:49:49 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.894 17:49:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:07.894 ************************************ 00:05:07.894 END TEST spdkcli_tcp 00:05:07.894 ************************************ 00:05:07.894 17:49:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:07.894 17:49:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.894 17:49:49 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.894 17:49:49 -- common/autotest_common.sh@10 -- # set +x 00:05:07.894 ************************************ 00:05:07.894 START TEST dpdk_mem_utility 00:05:07.894 ************************************ 00:05:07.894 17:49:49 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:07.894 * Looking for test storage... 00:05:07.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:07.894 17:49:49 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:07.894 17:49:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:07.894 17:49:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:07.894 17:49:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:07.894 
17:49:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.894 17:49:49 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:07.894 17:49:49 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.894 17:49:49 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:07.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.894 --rc genhtml_branch_coverage=1 00:05:07.894 --rc genhtml_function_coverage=1 00:05:07.894 --rc genhtml_legend=1 00:05:07.894 --rc geninfo_all_blocks=1 00:05:07.894 --rc geninfo_unexecuted_blocks=1 00:05:07.894 00:05:07.894 ' 00:05:07.894 17:49:49 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:07.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.894 --rc 
genhtml_branch_coverage=1 00:05:07.894 --rc genhtml_function_coverage=1 00:05:07.894 --rc genhtml_legend=1 00:05:07.894 --rc geninfo_all_blocks=1 00:05:07.894 --rc geninfo_unexecuted_blocks=1 00:05:07.894 00:05:07.895 ' 00:05:07.895 17:49:49 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:07.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.895 --rc genhtml_branch_coverage=1 00:05:07.895 --rc genhtml_function_coverage=1 00:05:07.895 --rc genhtml_legend=1 00:05:07.895 --rc geninfo_all_blocks=1 00:05:07.895 --rc geninfo_unexecuted_blocks=1 00:05:07.895 00:05:07.895 ' 00:05:07.895 17:49:49 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:07.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.895 --rc genhtml_branch_coverage=1 00:05:07.895 --rc genhtml_function_coverage=1 00:05:07.895 --rc genhtml_legend=1 00:05:07.895 --rc geninfo_all_blocks=1 00:05:07.895 --rc geninfo_unexecuted_blocks=1 00:05:07.895 00:05:07.895 ' 00:05:07.895 17:49:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:07.895 17:49:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58112 00:05:07.895 17:49:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:07.895 17:49:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58112 00:05:07.895 17:49:49 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58112 ']' 00:05:07.895 17:49:49 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.895 17:49:49 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:07.895 17:49:49 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.895 17:49:49 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.895 17:49:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:07.895 [2024-11-26 17:49:49.545197] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:05:07.895 [2024-11-26 17:49:49.545393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58112 ] 00:05:07.895 [2024-11-26 17:49:49.716957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.155 [2024-11-26 17:49:49.861070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.095 17:49:50 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.095 17:49:50 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:09.095 17:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:09.095 17:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:09.095 17:49:50 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.095 17:49:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:09.095 { 00:05:09.095 "filename": "/tmp/spdk_mem_dump.txt" 00:05:09.095 } 00:05:09.095 17:49:50 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.095 17:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:09.095 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:09.095 1 heaps 
totaling size 824.000000 MiB 00:05:09.095 size: 824.000000 MiB heap id: 0 00:05:09.095 end heaps---------- 00:05:09.095 9 mempools totaling size 603.782043 MiB 00:05:09.095 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:09.095 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:09.095 size: 100.555481 MiB name: bdev_io_58112 00:05:09.095 size: 50.003479 MiB name: msgpool_58112 00:05:09.095 size: 36.509338 MiB name: fsdev_io_58112 00:05:09.095 size: 21.763794 MiB name: PDU_Pool 00:05:09.095 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:09.095 size: 4.133484 MiB name: evtpool_58112 00:05:09.095 size: 0.026123 MiB name: Session_Pool 00:05:09.095 end mempools------- 00:05:09.095 6 memzones totaling size 4.142822 MiB 00:05:09.095 size: 1.000366 MiB name: RG_ring_0_58112 00:05:09.095 size: 1.000366 MiB name: RG_ring_1_58112 00:05:09.095 size: 1.000366 MiB name: RG_ring_4_58112 00:05:09.095 size: 1.000366 MiB name: RG_ring_5_58112 00:05:09.095 size: 0.125366 MiB name: RG_ring_2_58112 00:05:09.095 size: 0.015991 MiB name: RG_ring_3_58112 00:05:09.095 end memzones------- 00:05:09.095 17:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:09.356 heap id: 0 total size: 824.000000 MiB number of busy elements: 315 number of free elements: 18 00:05:09.356 list of free elements. 
size: 16.781372 MiB 00:05:09.356 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:09.356 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:09.356 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:09.356 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:09.356 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:09.356 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:09.356 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:09.356 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:09.356 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:09.356 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:09.356 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:09.356 element at address: 0x20001b400000 with size: 0.562927 MiB 00:05:09.356 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:09.356 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:09.356 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:09.356 element at address: 0x200012c00000 with size: 0.433228 MiB 00:05:09.356 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:09.356 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:09.356 list of standard malloc elements. 
size: 199.287720 MiB 00:05:09.356 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:09.356 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:09.356 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:09.356 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:09.356 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:09.356 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:09.356 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:09.356 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:09.356 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:09.356 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:09.356 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:09.356 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:09.356 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:09.356 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:09.356 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:09.356 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:09.356 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:09.357 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:09.357 
element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4911c0 with size: 0.000244 
MiB 00:05:09.357 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b492dc0 
with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:09.357 element at 
address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:09.357 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886bc80 with size: 0.000244 MiB 
00:05:09.357 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:09.357 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886d880 with 
size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:09.358 element at address: 
0x20002886f480 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:09.358 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:09.358 list of memzone associated elements. size: 607.930908 MiB 00:05:09.358 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:09.358 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:09.358 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:09.358 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:09.358 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:09.358 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58112_0 00:05:09.358 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:09.358 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58112_0 00:05:09.358 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:09.358 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58112_0 00:05:09.358 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:09.358 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:09.358 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:09.358 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:09.358 element at address: 0x2000004ffec0 with size: 
3.000305 MiB 00:05:09.358 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58112_0 00:05:09.358 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:09.358 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58112 00:05:09.358 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:09.358 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58112 00:05:09.358 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:09.358 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:09.358 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:09.358 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:09.358 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:09.358 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:09.358 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:09.358 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:09.358 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:09.358 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58112 00:05:09.358 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:09.358 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58112 00:05:09.358 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:09.358 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58112 00:05:09.358 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:09.358 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58112 00:05:09.358 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:09.358 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58112 00:05:09.358 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:09.358 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58112 00:05:09.358 element at address: 0x20001967dac0 with size: 
0.500549 MiB 00:05:09.358 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:09.358 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:09.358 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:09.358 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:09.358 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:09.358 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:09.358 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58112 00:05:09.358 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:09.358 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58112 00:05:09.358 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:09.358 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:09.358 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:09.358 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:09.358 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:09.358 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58112 00:05:09.358 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:09.358 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:09.358 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:09.358 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58112 00:05:09.358 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:09.358 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58112 00:05:09.358 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:09.358 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58112 00:05:09.358 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:09.358 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:09.358 17:49:51 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:09.358 17:49:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58112
00:05:09.358 17:49:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58112 ']'
00:05:09.358 17:49:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58112
00:05:09.358 17:49:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:09.358 17:49:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:09.358 17:49:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58112
00:05:09.358 17:49:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:09.358 17:49:51 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:09.358 killing process with pid 58112 17:49:51 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58112'
00:05:09.358 17:49:51 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58112
00:05:09.358 17:49:51 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58112
00:05:11.899
00:05:11.899 real 0m4.341s
00:05:11.899 user 0m4.184s
00:05:11.899 sys 0m0.692s
00:05:11.899 17:49:53 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:11.900 17:49:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:11.900 ************************************
00:05:11.900 END TEST dpdk_mem_utility
00:05:11.900 ************************************
00:05:11.900 17:49:53 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:11.900 17:49:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:11.900 17:49:53 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:11.900 17:49:53 -- common/autotest_common.sh@10 -- # set +x
00:05:11.900 ************************************
00:05:11.900 START TEST event ************************************
00:05:11.900 17:49:53 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:11.900 * Looking for test storage...
00:05:11.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:11.900 17:49:53 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:11.900 17:49:53 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:11.900 17:49:53 event -- common/autotest_common.sh@1693 -- # lcov --version
00:05:12.165 17:49:53 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:12.165 17:49:53 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:12.165 17:49:53 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:12.165 17:49:53 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:12.165 17:49:53 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:12.165 17:49:53 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:12.165 17:49:53 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:12.165 17:49:53 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:12.165 17:49:53 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:12.165 17:49:53 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:12.165 17:49:53 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:12.165 17:49:53 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:12.165 17:49:53 event -- scripts/common.sh@344 -- # case "$op" in
00:05:12.165 17:49:53 event -- scripts/common.sh@345 -- # : 1
00:05:12.165 17:49:53 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:12.165 17:49:53 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:12.165 17:49:53 event -- scripts/common.sh@365 -- # decimal 1
00:05:12.165 17:49:53 event -- scripts/common.sh@353 -- # local d=1
00:05:12.165 17:49:53 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:12.165 17:49:53 event -- scripts/common.sh@355 -- # echo 1
00:05:12.165 17:49:53 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:12.165 17:49:53 event -- scripts/common.sh@366 -- # decimal 2
00:05:12.165 17:49:53 event -- scripts/common.sh@353 -- # local d=2
00:05:12.165 17:49:53 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:12.165 17:49:53 event -- scripts/common.sh@355 -- # echo 2
00:05:12.165 17:49:53 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:12.165 17:49:53 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:12.165 17:49:53 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:12.165 17:49:53 event -- scripts/common.sh@368 -- # return 0
00:05:12.165 17:49:53 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:12.166 17:49:53 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:12.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.166 --rc genhtml_branch_coverage=1
00:05:12.166 --rc genhtml_function_coverage=1
00:05:12.166 --rc genhtml_legend=1
00:05:12.166 --rc geninfo_all_blocks=1
00:05:12.166 --rc geninfo_unexecuted_blocks=1
00:05:12.166
00:05:12.166 '
00:05:12.166 17:49:53 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:12.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.166 --rc genhtml_branch_coverage=1
00:05:12.166 --rc genhtml_function_coverage=1
00:05:12.166 --rc genhtml_legend=1
00:05:12.166 --rc geninfo_all_blocks=1
00:05:12.166 --rc geninfo_unexecuted_blocks=1
00:05:12.166
00:05:12.166 '
00:05:12.166 17:49:53 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:12.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.166 --rc genhtml_branch_coverage=1
00:05:12.166 --rc genhtml_function_coverage=1
00:05:12.166 --rc genhtml_legend=1
00:05:12.166 --rc geninfo_all_blocks=1
00:05:12.166 --rc geninfo_unexecuted_blocks=1
00:05:12.166
00:05:12.166 '
00:05:12.166 17:49:53 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:12.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.166 --rc genhtml_branch_coverage=1
00:05:12.166 --rc genhtml_function_coverage=1
00:05:12.166 --rc genhtml_legend=1
00:05:12.166 --rc geninfo_all_blocks=1
00:05:12.166 --rc geninfo_unexecuted_blocks=1
00:05:12.166
00:05:12.166 '
00:05:12.166 17:49:53 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:12.166 17:49:53 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:12.166 17:49:53 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:12.166 17:49:53 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:12.166 17:49:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:12.166 17:49:53 event -- common/autotest_common.sh@10 -- # set +x
00:05:12.166 ************************************
00:05:12.166 START TEST event_perf ************************************
00:05:12.166 17:49:53 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:12.166 Running I/O for 1 seconds...[2024-11-26 17:49:53.883636] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization...
00:05:12.166 [2024-11-26 17:49:53.883785] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58220 ] 00:05:12.445 [2024-11-26 17:49:54.066355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.445 [2024-11-26 17:49:54.188110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.445 [2024-11-26 17:49:54.188307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.445 [2024-11-26 17:49:54.188371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.445 [2024-11-26 17:49:54.188383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.826 Running I/O for 1 seconds... 00:05:13.826 lcore 0: 83783 00:05:13.826 lcore 1: 83773 00:05:13.826 lcore 2: 83776 00:05:13.826 lcore 3: 83779 00:05:13.826 done. 
00:05:13.826 00:05:13.826 real 0m1.617s 00:05:13.826 user 0m4.370s 00:05:13.826 sys 0m0.123s 00:05:13.826 17:49:55 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.826 17:49:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:13.826 ************************************ 00:05:13.826 END TEST event_perf 00:05:13.826 ************************************ 00:05:13.826 17:49:55 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:13.826 17:49:55 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:13.826 17:49:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.826 17:49:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.826 ************************************ 00:05:13.826 START TEST event_reactor 00:05:13.826 ************************************ 00:05:13.826 17:49:55 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:13.826 [2024-11-26 17:49:55.576151] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
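[editor's note] The scripts/common.sh trace above (cmp_versions, `lt 1.15 2`) splits dotted versions on `.`/`-`/`:` into arrays and compares them component by component. A minimal standalone sketch of the same idea, under a hypothetical name `version_lt` (the real helper also validates that each component is numeric, which is omitted here):

```shell
#!/usr/bin/env bash
# Simplified sketch of the dotted-version comparison traced from
# scripts/common.sh cmp_versions; returns 0 (true) iff $1 < $2.
version_lt() {
    local IFS=.-:                 # split on the same separators as the trace
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                      # equal versions => not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This mirrors why the trace prints `return 0` for `lt 1.15 2`: at the first component, 1 < 2 already decides the comparison.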
00:05:13.826 [2024-11-26 17:49:55.576311] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58262 ] 00:05:14.085 [2024-11-26 17:49:55.762968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.085 [2024-11-26 17:49:55.891387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.465 test_start 00:05:15.465 oneshot 00:05:15.465 tick 100 00:05:15.465 tick 100 00:05:15.465 tick 250 00:05:15.465 tick 100 00:05:15.465 tick 100 00:05:15.465 tick 250 00:05:15.465 tick 100 00:05:15.465 tick 500 00:05:15.465 tick 100 00:05:15.465 tick 100 00:05:15.465 tick 250 00:05:15.465 tick 100 00:05:15.465 tick 100 00:05:15.465 test_end 00:05:15.465 00:05:15.465 real 0m1.631s 00:05:15.465 user 0m1.405s 00:05:15.465 sys 0m0.116s 00:05:15.465 17:49:57 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.465 17:49:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:15.465 ************************************ 00:05:15.465 END TEST event_reactor 00:05:15.465 ************************************ 00:05:15.465 17:49:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:15.465 17:49:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:15.465 17:49:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.465 17:49:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.465 ************************************ 00:05:15.465 START TEST event_reactor_perf 00:05:15.465 ************************************ 00:05:15.465 17:49:57 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:15.465 [2024-11-26 
17:49:57.273280] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:05:15.465 [2024-11-26 17:49:57.273424] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58296 ] 00:05:15.725 [2024-11-26 17:49:57.451498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.725 [2024-11-26 17:49:57.584261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.104 test_start 00:05:17.104 test_end 00:05:17.104 Performance: 354130 events per second 00:05:17.104 00:05:17.104 real 0m1.593s 00:05:17.104 user 0m1.377s 00:05:17.104 sys 0m0.107s 00:05:17.104 17:49:58 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.104 17:49:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:17.104 ************************************ 00:05:17.104 END TEST event_reactor_perf 00:05:17.104 ************************************ 00:05:17.104 17:49:58 event -- event/event.sh@49 -- # uname -s 00:05:17.104 17:49:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:17.104 17:49:58 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:17.104 17:49:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.104 17:49:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.104 17:49:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.104 ************************************ 00:05:17.104 START TEST event_scheduler 00:05:17.105 ************************************ 00:05:17.105 17:49:58 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:17.365 * Looking for test storage... 
00:05:17.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:17.365 17:49:58 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.365 17:49:58 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.365 17:49:58 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.365 17:49:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.365 17:49:59 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:17.365 17:49:59 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.365 17:49:59 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.365 --rc genhtml_branch_coverage=1 00:05:17.365 --rc genhtml_function_coverage=1 00:05:17.365 --rc genhtml_legend=1 00:05:17.365 --rc geninfo_all_blocks=1 00:05:17.365 --rc geninfo_unexecuted_blocks=1 00:05:17.365 00:05:17.365 ' 00:05:17.365 17:49:59 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.365 --rc genhtml_branch_coverage=1 00:05:17.365 --rc genhtml_function_coverage=1 00:05:17.365 --rc 
genhtml_legend=1 00:05:17.365 --rc geninfo_all_blocks=1 00:05:17.365 --rc geninfo_unexecuted_blocks=1 00:05:17.365 00:05:17.365 ' 00:05:17.365 17:49:59 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:17.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.365 --rc genhtml_branch_coverage=1 00:05:17.365 --rc genhtml_function_coverage=1 00:05:17.365 --rc genhtml_legend=1 00:05:17.365 --rc geninfo_all_blocks=1 00:05:17.365 --rc geninfo_unexecuted_blocks=1 00:05:17.365 00:05:17.365 ' 00:05:17.365 17:49:59 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.365 --rc genhtml_branch_coverage=1 00:05:17.365 --rc genhtml_function_coverage=1 00:05:17.365 --rc genhtml_legend=1 00:05:17.365 --rc geninfo_all_blocks=1 00:05:17.365 --rc geninfo_unexecuted_blocks=1 00:05:17.365 00:05:17.365 ' 00:05:17.365 17:49:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:17.365 17:49:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58372 00:05:17.365 17:49:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:17.365 17:49:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.365 17:49:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58372 00:05:17.365 17:49:59 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58372 ']' 00:05:17.365 17:49:59 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.365 17:49:59 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:17.365 17:49:59 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.365 17:49:59 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.365 17:49:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.625 [2024-11-26 17:49:59.228490] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:05:17.625 [2024-11-26 17:49:59.228714] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58372 ] 00:05:17.625 [2024-11-26 17:49:59.416118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:17.884 [2024-11-26 17:49:59.569255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.884 [2024-11-26 17:49:59.569404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.884 [2024-11-26 17:49:59.569547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.884 [2024-11-26 17:49:59.569585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.453 17:50:00 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.453 17:50:00 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:18.453 17:50:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:18.453 17:50:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.453 17:50:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.453 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:18.453 POWER: Cannot set governor of lcore 0 to userspace 00:05:18.453 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:18.453 POWER: Cannot set governor of lcore 0 to performance 00:05:18.453 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:18.453 POWER: Cannot set governor of lcore 0 to userspace 00:05:18.453 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:18.453 POWER: Cannot set governor of lcore 0 to userspace 00:05:18.453 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:18.453 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:18.453 POWER: Unable to set Power Management Environment for lcore 0 00:05:18.453 [2024-11-26 17:50:00.134345] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:18.453 [2024-11-26 17:50:00.134376] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:18.453 [2024-11-26 17:50:00.134389] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:18.453 [2024-11-26 17:50:00.134414] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:18.453 [2024-11-26 17:50:00.134424] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:18.453 [2024-11-26 17:50:00.134435] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:18.453 17:50:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.453 17:50:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:18.453 17:50:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.453 17:50:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.713 [2024-11-26 17:50:00.547761] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
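[editor's note] The POWER errors above are the dpdk governor trying to write each lcore's cpufreq `scaling_governor` sysfs node; inside a VM those files do not exist, so the dynamic scheduler logs the failure and continues without a governor. A hedged shell sketch of the same per-core probe (`probe_governor` is a hypothetical name, not an SPDK helper):

```shell
#!/usr/bin/env bash
# Sketch of the cpufreq probe attempted per lcore in the trace above;
# on VMs /sys/devices/system/cpu/cpuN/cpufreq is typically absent.
probe_governor() {
    local cpu=${1:-0}
    local gov_file=/sys/devices/system/cpu/cpu${cpu}/cpufreq/scaling_governor
    if [ -w "$gov_file" ]; then
        echo "governor for cpu${cpu}: $(cat "$gov_file")"
    else
        echo "no settable governor for cpu${cpu}"
    fi
}

probe_governor 0
```

Which branch runs depends on the host: bare metal with cpufreq support takes the first, the VM in this log takes the second, matching the "Cannot set governor of lcore 0" notices.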
00:05:18.713 17:50:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.713 17:50:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:18.713 17:50:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.713 17:50:00 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.713 17:50:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.713 ************************************ 00:05:18.713 START TEST scheduler_create_thread 00:05:18.713 ************************************ 00:05:18.713 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:18.713 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:18.713 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.713 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.973 2 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.973 3 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.973 4 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.973 5 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.973 6 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:18.973 7 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.973 8 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.973 9 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.973 10 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.973 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.354 17:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.355 17:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:20.355 17:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:20.355 17:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.355 17:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.295 17:50:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.295 17:50:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:21.295 17:50:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.295 17:50:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.865 17:50:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.865 17:50:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:21.865 17:50:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:21.865 17:50:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.865 17:50:03 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.804 17:50:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.804 00:05:22.804 real 0m3.887s 00:05:22.804 user 0m0.028s 00:05:22.804 sys 0m0.009s 00:05:22.804 17:50:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.804 17:50:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.804 ************************************ 00:05:22.804 END TEST scheduler_create_thread 00:05:22.804 ************************************ 00:05:22.804 17:50:04 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:22.804 17:50:04 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58372 00:05:22.804 17:50:04 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58372 ']' 00:05:22.804 17:50:04 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58372 00:05:22.804 17:50:04 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:22.804 17:50:04 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.804 17:50:04 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58372 00:05:22.804 17:50:04 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:22.804 17:50:04 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:22.804 killing process with pid 58372 00:05:22.804 17:50:04 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58372' 00:05:22.804 17:50:04 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58372 00:05:22.804 17:50:04 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58372 00:05:23.062 [2024-11-26 17:50:04.828174] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:24.444 00:05:24.444 real 0m7.262s 00:05:24.444 user 0m14.879s 00:05:24.444 sys 0m0.688s 00:05:24.444 17:50:06 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.444 17:50:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.444 ************************************ 00:05:24.444 END TEST event_scheduler 00:05:24.444 ************************************ 00:05:24.444 17:50:06 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:24.444 17:50:06 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:24.444 17:50:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.444 17:50:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.444 17:50:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.444 ************************************ 00:05:24.444 START TEST app_repeat 00:05:24.444 ************************************ 00:05:24.444 17:50:06 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:24.444 17:50:06 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.444 17:50:06 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.444 17:50:06 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:24.444 17:50:06 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.444 17:50:06 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:24.444 17:50:06 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:24.444 17:50:06 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:24.444 17:50:06 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58500 00:05:24.444 17:50:06 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:24.444 
17:50:06 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.444 Process app_repeat pid: 58500 00:05:24.444 17:50:06 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58500' 00:05:24.444 17:50:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:24.444 spdk_app_start Round 0 00:05:24.444 17:50:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:24.444 17:50:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58500 /var/tmp/spdk-nbd.sock 00:05:24.444 17:50:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58500 ']' 00:05:24.444 17:50:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.444 17:50:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:24.444 17:50:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:24.444 17:50:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.444 17:50:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.444 [2024-11-26 17:50:06.282224] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:05:24.445 [2024-11-26 17:50:06.282404] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58500 ] 00:05:24.705 [2024-11-26 17:50:06.462538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.964 [2024-11-26 17:50:06.586924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.964 [2024-11-26 17:50:06.586964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.560 17:50:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.560 17:50:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:25.560 17:50:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.819 Malloc0 00:05:25.819 17:50:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.078 Malloc1 00:05:26.078 17:50:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.078 17:50:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.078 17:50:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.078 17:50:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.078 17:50:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.078 17:50:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.078 17:50:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.078 17:50:07 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.079 17:50:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.079 17:50:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.079 17:50:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.079 17:50:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.079 17:50:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:26.079 17:50:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.079 17:50:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.079 17:50:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.337 /dev/nbd0 00:05:26.337 17:50:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.337 17:50:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.337 17:50:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:26.337 17:50:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:26.337 17:50:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.337 17:50:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.337 17:50:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:26.337 17:50:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:26.337 17:50:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.337 17:50:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.337 17:50:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.337 1+0 records in 00:05:26.337 1+0 
records out 00:05:26.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246314 s, 16.6 MB/s 00:05:26.337 17:50:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.337 17:50:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:26.337 17:50:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.337 17:50:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.337 17:50:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:26.337 17:50:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.337 17:50:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.338 17:50:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:26.597 /dev/nbd1 00:05:26.597 17:50:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:26.597 17:50:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:26.597 17:50:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:26.597 17:50:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:26.597 17:50:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.597 17:50:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.597 17:50:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:26.597 17:50:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:26.597 17:50:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.597 17:50:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.597 17:50:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.597 1+0 records in 00:05:26.597 1+0 records out 00:05:26.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327075 s, 12.5 MB/s 00:05:26.597 17:50:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.597 17:50:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:26.597 17:50:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.597 17:50:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.597 17:50:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:26.597 17:50:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.597 17:50:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.597 17:50:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.597 17:50:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.597 17:50:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:26.857 { 00:05:26.857 "nbd_device": "/dev/nbd0", 00:05:26.857 "bdev_name": "Malloc0" 00:05:26.857 }, 00:05:26.857 { 00:05:26.857 "nbd_device": "/dev/nbd1", 00:05:26.857 "bdev_name": "Malloc1" 00:05:26.857 } 00:05:26.857 ]' 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:26.857 { 00:05:26.857 "nbd_device": "/dev/nbd0", 00:05:26.857 "bdev_name": "Malloc0" 00:05:26.857 }, 00:05:26.857 { 00:05:26.857 "nbd_device": "/dev/nbd1", 00:05:26.857 "bdev_name": "Malloc1" 00:05:26.857 } 00:05:26.857 ]' 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.857 /dev/nbd1' 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.857 /dev/nbd1' 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:26.857 256+0 records in 00:05:26.857 256+0 records out 00:05:26.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147941 s, 70.9 MB/s 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:26.857 256+0 records in 00:05:26.857 256+0 records out 00:05:26.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282159 s, 37.2 MB/s 00:05:26.857 17:50:08 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:26.857 256+0 records in 00:05:26.857 256+0 records out 00:05:26.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307822 s, 34.1 MB/s 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.857 17:50:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:27.116 17:50:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:27.116 17:50:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:27.116 17:50:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:27.116 17:50:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.116 17:50:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.116 17:50:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:27.116 17:50:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.116 17:50:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.116 17:50:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.116 17:50:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:27.375 17:50:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:27.375 17:50:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:27.375 17:50:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:27.375 17:50:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.375 17:50:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.375 17:50:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:27.375 17:50:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:27.375 17:50:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.375 17:50:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.375 17:50:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.375 17:50:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.634 17:50:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:27.634 17:50:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.634 17:50:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:27.634 17:50:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:27.634 17:50:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:27.634 17:50:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.635 17:50:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:27.635 17:50:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:27.635 17:50:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:27.635 17:50:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:27.635 17:50:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:27.635 17:50:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:27.635 17:50:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.203 17:50:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:29.583 [2024-11-26 17:50:11.099335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.583 [2024-11-26 17:50:11.216825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.583 [2024-11-26 17:50:11.216828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.583 
[2024-11-26 17:50:11.429443] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:29.583 [2024-11-26 17:50:11.429521] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:31.491 spdk_app_start Round 1 00:05:31.491 17:50:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:31.491 17:50:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:31.491 17:50:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58500 /var/tmp/spdk-nbd.sock 00:05:31.491 17:50:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58500 ']' 00:05:31.491 17:50:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.491 17:50:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:31.491 17:50:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:31.491 17:50:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.491 17:50:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.491 17:50:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.491 17:50:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:31.491 17:50:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.491 Malloc0 00:05:31.756 17:50:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.756 Malloc1 00:05:32.022 17:50:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:32.022 17:50:13 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.022 /dev/nbd0 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.022 17:50:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:32.022 17:50:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:32.022 17:50:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:32.022 17:50:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:32.022 17:50:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:32.022 17:50:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:32.022 17:50:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:32.022 17:50:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:32.022 17:50:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.022 1+0 records in 00:05:32.022 1+0 records out 00:05:32.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374918 s, 10.9 MB/s 00:05:32.022 17:50:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.022 17:50:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:32.022 17:50:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.022 
17:50:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:32.022 17:50:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.022 17:50:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:32.282 /dev/nbd1 00:05:32.283 17:50:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:32.283 17:50:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:32.283 17:50:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:32.283 17:50:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:32.283 17:50:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:32.283 17:50:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:32.283 17:50:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:32.283 17:50:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:32.283 17:50:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:32.283 17:50:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:32.283 17:50:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.283 1+0 records in 00:05:32.283 1+0 records out 00:05:32.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464213 s, 8.8 MB/s 00:05:32.283 17:50:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.283 17:50:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:32.283 17:50:14 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.543 17:50:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:32.543 17:50:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:32.543 17:50:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.543 17:50:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.543 17:50:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.543 17:50:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.543 17:50:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.543 17:50:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:32.543 { 00:05:32.543 "nbd_device": "/dev/nbd0", 00:05:32.543 "bdev_name": "Malloc0" 00:05:32.543 }, 00:05:32.543 { 00:05:32.543 "nbd_device": "/dev/nbd1", 00:05:32.543 "bdev_name": "Malloc1" 00:05:32.543 } 00:05:32.543 ]' 00:05:32.543 17:50:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.543 17:50:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:32.543 { 00:05:32.543 "nbd_device": "/dev/nbd0", 00:05:32.543 "bdev_name": "Malloc0" 00:05:32.543 }, 00:05:32.543 { 00:05:32.543 "nbd_device": "/dev/nbd1", 00:05:32.543 "bdev_name": "Malloc1" 00:05:32.543 } 00:05:32.543 ]' 00:05:32.543 17:50:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:32.543 /dev/nbd1' 00:05:32.543 17:50:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:32.543 /dev/nbd1' 00:05:32.543 17:50:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:32.803 
17:50:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:32.803 256+0 records in 00:05:32.803 256+0 records out 00:05:32.803 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00662224 s, 158 MB/s 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:32.803 256+0 records in 00:05:32.803 256+0 records out 00:05:32.803 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251622 s, 41.7 MB/s 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:32.803 256+0 records in 00:05:32.803 256+0 records out 00:05:32.803 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301489 s, 34.8 MB/s 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:32.803 17:50:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.804 17:50:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:32.804 17:50:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.804 17:50:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:32.804 17:50:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.804 17:50:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.804 17:50:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.804 17:50:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:32.804 17:50:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.804 17:50:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.063 17:50:14 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.063 17:50:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.063 17:50:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.063 17:50:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.063 17:50:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.063 17:50:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.063 17:50:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.063 17:50:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.063 17:50:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.063 17:50:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:33.323 17:50:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:33.323 17:50:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:33.323 17:50:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:33.323 17:50:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.323 17:50:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.323 17:50:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:33.323 17:50:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.323 17:50:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.323 17:50:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.323 17:50:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.323 17:50:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.582 17:50:15 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:33.582 17:50:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.582 17:50:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:33.582 17:50:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:33.582 17:50:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.582 17:50:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.582 17:50:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:33.582 17:50:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.582 17:50:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.582 17:50:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.582 17:50:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.582 17:50:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.582 17:50:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.152 17:50:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:35.091 [2024-11-26 17:50:16.935561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.350 [2024-11-26 17:50:17.057820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.350 [2024-11-26 17:50:17.057852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.609 [2024-11-26 17:50:17.261538] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:35.609 [2024-11-26 17:50:17.261630] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:36.987 spdk_app_start Round 2 00:05:36.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:36.987 17:50:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:36.987 17:50:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:36.987 17:50:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58500 /var/tmp/spdk-nbd.sock 00:05:36.987 17:50:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58500 ']' 00:05:36.987 17:50:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.987 17:50:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.987 17:50:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.987 17:50:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.987 17:50:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.248 17:50:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.248 17:50:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:37.248 17:50:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.510 Malloc0 00:05:37.510 17:50:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.771 Malloc1 00:05:37.771 17:50:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.771 17:50:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.771 17:50:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.771 17:50:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.771 17:50:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.771 17:50:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.771 17:50:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.771 17:50:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.771 17:50:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.771 17:50:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.771 17:50:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.771 17:50:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.771 17:50:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:37.771 17:50:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.771 17:50:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.771 17:50:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.031 /dev/nbd0 00:05:38.031 17:50:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.031 17:50:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.031 17:50:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:38.031 17:50:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:38.031 17:50:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.031 17:50:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.031 17:50:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:38.031 17:50:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:38.031 17:50:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:38.031 17:50:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.031 17:50:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.031 1+0 records in 00:05:38.031 1+0 records out 00:05:38.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268704 s, 15.2 MB/s 00:05:38.031 17:50:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.031 17:50:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.031 17:50:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.031 17:50:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.031 17:50:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:38.031 17:50:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.031 17:50:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.031 17:50:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.290 /dev/nbd1 00:05:38.290 17:50:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.290 17:50:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.290 17:50:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:38.290 17:50:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:38.290 17:50:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.290 17:50:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.290 17:50:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:38.290 17:50:20 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:38.290 17:50:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.290 17:50:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.290 17:50:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.290 1+0 records in 00:05:38.290 1+0 records out 00:05:38.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390594 s, 10.5 MB/s 00:05:38.290 17:50:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.290 17:50:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.290 17:50:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.290 17:50:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.290 17:50:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:38.290 17:50:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.290 17:50:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.290 17:50:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.290 17:50:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.548 17:50:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.548 17:50:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:38.548 { 00:05:38.548 "nbd_device": "/dev/nbd0", 00:05:38.548 "bdev_name": "Malloc0" 00:05:38.548 }, 00:05:38.548 { 00:05:38.548 "nbd_device": "/dev/nbd1", 00:05:38.548 "bdev_name": "Malloc1" 00:05:38.548 } 00:05:38.548 ]' 00:05:38.548 17:50:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.548 { 
00:05:38.548 "nbd_device": "/dev/nbd0", 00:05:38.548 "bdev_name": "Malloc0" 00:05:38.548 }, 00:05:38.548 { 00:05:38.548 "nbd_device": "/dev/nbd1", 00:05:38.548 "bdev_name": "Malloc1" 00:05:38.548 } 00:05:38.548 ]' 00:05:38.548 17:50:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.807 /dev/nbd1' 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.807 /dev/nbd1' 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.807 256+0 records in 00:05:38.807 256+0 records out 00:05:38.807 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014229 s, 73.7 MB/s 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.807 17:50:20 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.807 256+0 records in 00:05:38.807 256+0 records out 00:05:38.807 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316624 s, 33.1 MB/s 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.807 256+0 records in 00:05:38.807 256+0 records out 00:05:38.807 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0383562 s, 27.3 MB/s 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.807 17:50:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.067 17:50:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.067 17:50:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.067 17:50:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.067 17:50:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.067 17:50:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.067 17:50:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.067 17:50:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.067 17:50:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.067 17:50:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.067 17:50:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.326 17:50:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.326 17:50:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.326 17:50:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.326 17:50:21 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.326 17:50:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.326 17:50:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.326 17:50:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.326 17:50:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.326 17:50:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.326 17:50:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.326 17:50:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.585 17:50:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.585 17:50:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.585 17:50:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.585 17:50:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.585 17:50:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.585 17:50:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.585 17:50:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.585 17:50:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.585 17:50:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.585 17:50:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.585 17:50:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.585 17:50:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.585 17:50:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.153 17:50:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.529 
[2024-11-26 17:50:23.065990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.529 [2024-11-26 17:50:23.198743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.529 [2024-11-26 17:50:23.198742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.787 [2024-11-26 17:50:23.422828] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.787 [2024-11-26 17:50:23.422923] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.163 17:50:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58500 /var/tmp/spdk-nbd.sock 00:05:43.163 17:50:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58500 ']' 00:05:43.163 17:50:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.163 17:50:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.163 17:50:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:43.163 17:50:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.163 17:50:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.421 17:50:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.421 17:50:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:43.421 17:50:25 event.app_repeat -- event/event.sh@39 -- # killprocess 58500 00:05:43.421 17:50:25 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58500 ']' 00:05:43.421 17:50:25 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58500 00:05:43.421 17:50:25 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:43.421 17:50:25 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.421 17:50:25 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58500 00:05:43.421 17:50:25 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.421 17:50:25 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.422 17:50:25 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58500' 00:05:43.422 killing process with pid 58500 00:05:43.422 17:50:25 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58500 00:05:43.422 17:50:25 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58500 00:05:44.796 spdk_app_start is called in Round 0. 00:05:44.796 Shutdown signal received, stop current app iteration 00:05:44.796 Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 reinitialization... 00:05:44.796 spdk_app_start is called in Round 1. 00:05:44.796 Shutdown signal received, stop current app iteration 00:05:44.796 Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 reinitialization... 00:05:44.796 spdk_app_start is called in Round 2. 
00:05:44.796 Shutdown signal received, stop current app iteration 00:05:44.796 Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 reinitialization... 00:05:44.796 spdk_app_start is called in Round 3. 00:05:44.796 Shutdown signal received, stop current app iteration 00:05:44.796 17:50:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:44.796 17:50:26 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:44.796 00:05:44.796 real 0m20.105s 00:05:44.796 user 0m43.297s 00:05:44.796 sys 0m2.745s 00:05:44.796 17:50:26 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.796 17:50:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.796 ************************************ 00:05:44.796 END TEST app_repeat 00:05:44.796 ************************************ 00:05:44.796 17:50:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:44.796 17:50:26 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:44.796 17:50:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.796 17:50:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.796 17:50:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.796 ************************************ 00:05:44.796 START TEST cpu_locks 00:05:44.796 ************************************ 00:05:44.796 17:50:26 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:44.796 * Looking for test storage... 
00:05:44.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:44.796 17:50:26 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.796 17:50:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.796 17:50:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.796 17:50:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.796 17:50:26 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:44.796 17:50:26 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.796 17:50:26 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.796 --rc genhtml_branch_coverage=1 00:05:44.796 --rc genhtml_function_coverage=1 00:05:44.796 --rc genhtml_legend=1 00:05:44.796 --rc geninfo_all_blocks=1 00:05:44.796 --rc geninfo_unexecuted_blocks=1 00:05:44.796 00:05:44.796 ' 00:05:44.796 17:50:26 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.796 --rc genhtml_branch_coverage=1 00:05:44.796 --rc genhtml_function_coverage=1 00:05:44.796 --rc genhtml_legend=1 00:05:44.796 --rc geninfo_all_blocks=1 00:05:44.796 --rc geninfo_unexecuted_blocks=1 
00:05:44.796 00:05:44.796 ' 00:05:44.796 17:50:26 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.796 --rc genhtml_branch_coverage=1 00:05:44.796 --rc genhtml_function_coverage=1 00:05:44.796 --rc genhtml_legend=1 00:05:44.796 --rc geninfo_all_blocks=1 00:05:44.796 --rc geninfo_unexecuted_blocks=1 00:05:44.796 00:05:44.796 ' 00:05:44.796 17:50:26 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.796 --rc genhtml_branch_coverage=1 00:05:44.796 --rc genhtml_function_coverage=1 00:05:44.796 --rc genhtml_legend=1 00:05:44.796 --rc geninfo_all_blocks=1 00:05:44.796 --rc geninfo_unexecuted_blocks=1 00:05:44.796 00:05:44.796 ' 00:05:44.796 17:50:26 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:44.796 17:50:26 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:44.796 17:50:26 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:44.796 17:50:26 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:44.796 17:50:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.796 17:50:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.796 17:50:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.796 ************************************ 00:05:44.796 START TEST default_locks 00:05:44.796 ************************************ 00:05:44.796 17:50:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:44.796 17:50:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58955 00:05:44.796 17:50:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.796 
17:50:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58955 00:05:44.796 17:50:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58955 ']' 00:05:44.796 17:50:26 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.796 17:50:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.796 17:50:26 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.796 17:50:26 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.796 17:50:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.084 [2024-11-26 17:50:26.750761] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:05:45.084 [2024-11-26 17:50:26.750943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58955 ] 00:05:45.084 [2024-11-26 17:50:26.936009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.344 [2024-11-26 17:50:27.073655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.281 17:50:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.281 17:50:28 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:46.281 17:50:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58955 00:05:46.281 17:50:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58955 00:05:46.281 17:50:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.540 17:50:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58955 00:05:46.540 17:50:28 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58955 ']' 00:05:46.540 17:50:28 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58955 00:05:46.540 17:50:28 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:46.540 17:50:28 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.540 17:50:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58955 00:05:46.540 killing process with pid 58955 00:05:46.540 17:50:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.540 17:50:28 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.540 17:50:28 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58955' 00:05:46.540 17:50:28 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58955 00:05:46.540 17:50:28 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58955 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58955 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58955 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58955 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58955 ']' 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.833 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58955) - No such process 00:05:49.833 ERROR: process (pid: 58955) is no longer running 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:49.833 17:50:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:49.833 00:05:49.833 real 0m4.579s 00:05:49.833 user 0m4.540s 00:05:49.833 sys 0m0.667s 00:05:49.834 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.834 17:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.834 ************************************ 00:05:49.834 END TEST default_locks 00:05:49.834 ************************************ 00:05:49.834 17:50:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:49.834 17:50:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:49.834 17:50:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.834 17:50:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.834 ************************************ 00:05:49.834 START TEST default_locks_via_rpc 00:05:49.834 ************************************ 00:05:49.834 17:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:49.834 17:50:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59030 00:05:49.834 17:50:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.834 17:50:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59030 00:05:49.834 17:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59030 ']' 00:05:49.834 17:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.834 17:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.834 17:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.834 17:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.834 17:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.834 [2024-11-26 17:50:31.376959] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:05:49.834 [2024-11-26 17:50:31.377104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59030 ] 00:05:49.834 [2024-11-26 17:50:31.538594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.834 [2024-11-26 17:50:31.675291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.213 17:50:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.213 17:50:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:51.213 17:50:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:51.213 17:50:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.213 17:50:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.213 17:50:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.213 17:50:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:51.213 17:50:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:51.213 17:50:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:51.213 17:50:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:51.213 17:50:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:51.213 17:50:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.213 17:50:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.213 17:50:32 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.213 17:50:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59030 00:05:51.213 17:50:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59030 00:05:51.213 17:50:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.472 17:50:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59030 00:05:51.472 17:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59030 ']' 00:05:51.472 17:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59030 00:05:51.472 17:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:51.472 17:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.472 17:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59030 00:05:51.472 killing process with pid 59030 00:05:51.472 17:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.472 17:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.472 17:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59030' 00:05:51.472 17:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59030 00:05:51.472 17:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59030 00:05:54.012 00:05:54.012 real 0m4.483s 00:05:54.012 user 0m4.477s 00:05:54.012 sys 0m0.669s 00:05:54.012 17:50:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.012 17:50:35 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.012 ************************************ 00:05:54.012 END TEST default_locks_via_rpc 00:05:54.012 ************************************ 00:05:54.012 17:50:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:54.012 17:50:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.012 17:50:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.012 17:50:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.012 ************************************ 00:05:54.012 START TEST non_locking_app_on_locked_coremask 00:05:54.012 ************************************ 00:05:54.012 17:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:54.012 17:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59114 00:05:54.012 17:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.012 17:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59114 /var/tmp/spdk.sock 00:05:54.012 17:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59114 ']' 00:05:54.012 17:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.012 17:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:54.012 17:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.012 17:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.012 17:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.272 [2024-11-26 17:50:35.926312] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:05:54.272 [2024-11-26 17:50:35.926431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59114 ] 00:05:54.272 [2024-11-26 17:50:36.100513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.531 [2024-11-26 17:50:36.220410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.468 17:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.468 17:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.468 17:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:55.468 17:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59131 00:05:55.468 17:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59131 /var/tmp/spdk2.sock 00:05:55.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:55.468 17:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59131 ']' 00:05:55.468 17:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.468 17:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.468 17:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.468 17:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.468 17:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.468 [2024-11-26 17:50:37.225536] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:05:55.468 [2024-11-26 17:50:37.225686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59131 ] 00:05:55.728 [2024-11-26 17:50:37.406408] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:55.728 [2024-11-26 17:50:37.406499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.987 [2024-11-26 17:50:37.653678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.529 17:50:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.529 17:50:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:58.529 17:50:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59114 00:05:58.529 17:50:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59114 00:05:58.529 17:50:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.529 17:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59114 00:05:58.529 17:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59114 ']' 00:05:58.529 17:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59114 00:05:58.529 17:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:58.529 17:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.529 17:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59114 00:05:58.529 17:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.529 17:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.529 17:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
59114' 00:05:58.529 killing process with pid 59114 00:05:58.529 17:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59114 00:05:58.529 17:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59114 00:06:03.808 17:50:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59131 00:06:03.808 17:50:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59131 ']' 00:06:03.808 17:50:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59131 00:06:03.808 17:50:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:03.808 17:50:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.808 17:50:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59131 00:06:03.808 killing process with pid 59131 00:06:03.808 17:50:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.808 17:50:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.808 17:50:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59131' 00:06:03.808 17:50:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59131 00:06:03.808 17:50:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59131 00:06:07.098 00:06:07.098 real 0m12.479s 00:06:07.098 user 0m12.700s 00:06:07.098 sys 0m1.349s 00:06:07.098 17:50:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:06:07.098 17:50:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.098 ************************************ 00:06:07.098 END TEST non_locking_app_on_locked_coremask 00:06:07.098 ************************************ 00:06:07.099 17:50:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:07.099 17:50:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.099 17:50:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.099 17:50:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.099 ************************************ 00:06:07.099 START TEST locking_app_on_unlocked_coremask 00:06:07.099 ************************************ 00:06:07.099 17:50:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:07.099 17:50:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59288 00:06:07.099 17:50:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:07.099 17:50:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59288 /var/tmp/spdk.sock 00:06:07.099 17:50:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59288 ']' 00:06:07.099 17:50:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.099 17:50:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.099 17:50:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:06:07.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.099 17:50:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.099 17:50:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.099 [2024-11-26 17:50:48.493357] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:06:07.099 [2024-11-26 17:50:48.493532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59288 ] 00:06:07.099 [2024-11-26 17:50:48.675715] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:07.099 [2024-11-26 17:50:48.675799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.099 [2024-11-26 17:50:48.813447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.043 17:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.043 17:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.043 17:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59309 00:06:08.043 17:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59309 /var/tmp/spdk2.sock 00:06:08.043 17:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59309 ']' 00:06:08.043 17:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:08.043 17:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.043 17:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.043 17:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.043 17:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.043 17:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:08.043 [2024-11-26 17:50:49.824085] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:06:08.043 [2024-11-26 17:50:49.824278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59309 ] 00:06:08.303 [2024-11-26 17:50:49.998764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.563 [2024-11-26 17:50:50.260336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.100 17:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.100 17:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:11.100 17:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59309 00:06:11.100 17:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59309 00:06:11.100 17:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
00:06:11.100 17:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59288 00:06:11.100 17:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59288 ']' 00:06:11.100 17:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59288 00:06:11.100 17:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:11.100 17:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.100 17:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59288 00:06:11.100 killing process with pid 59288 00:06:11.101 17:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.101 17:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.101 17:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59288' 00:06:11.101 17:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59288 00:06:11.101 17:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59288 00:06:17.667 17:50:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59309 00:06:17.667 17:50:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59309 ']' 00:06:17.667 17:50:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59309 00:06:17.667 17:50:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:17.667 17:50:58 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.667 17:50:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59309 00:06:17.667 17:50:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.667 17:50:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.667 17:50:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59309' 00:06:17.667 killing process with pid 59309 00:06:17.667 17:50:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59309 00:06:17.667 17:50:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59309 00:06:19.615 00:06:19.615 real 0m12.689s 00:06:19.615 user 0m12.912s 00:06:19.615 sys 0m1.344s 00:06:19.615 ************************************ 00:06:19.615 END TEST locking_app_on_unlocked_coremask 00:06:19.615 ************************************ 00:06:19.615 17:51:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.615 17:51:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.615 17:51:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:19.615 17:51:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.615 17:51:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.615 17:51:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.615 ************************************ 00:06:19.615 START TEST locking_app_on_locked_coremask 00:06:19.615 ************************************ 
00:06:19.615 17:51:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:19.615 17:51:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59463 00:06:19.615 17:51:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.615 17:51:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59463 /var/tmp/spdk.sock 00:06:19.615 17:51:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59463 ']' 00:06:19.615 17:51:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.615 17:51:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.615 17:51:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.615 17:51:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.615 17:51:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.615 [2024-11-26 17:51:01.244431] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:06:19.615 [2024-11-26 17:51:01.244705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59463 ] 00:06:19.615 [2024-11-26 17:51:01.426861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.874 [2024-11-26 17:51:01.554443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.811 17:51:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.811 17:51:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.811 17:51:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:20.811 17:51:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59490 00:06:20.811 17:51:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59490 /var/tmp/spdk2.sock 00:06:20.811 17:51:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:20.811 17:51:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59490 /var/tmp/spdk2.sock 00:06:20.811 17:51:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:20.811 17:51:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.811 17:51:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:20.811 17:51:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:06:20.811 17:51:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59490 /var/tmp/spdk2.sock 00:06:20.811 17:51:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59490 ']' 00:06:20.811 17:51:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.811 17:51:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.811 17:51:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.811 17:51:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.811 17:51:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.811 [2024-11-26 17:51:02.575620] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:06:20.811 [2024-11-26 17:51:02.575853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59490 ] 00:06:21.072 [2024-11-26 17:51:02.751656] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59463 has claimed it. 00:06:21.073 [2024-11-26 17:51:02.751755] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:21.668 ERROR: process (pid: 59490) is no longer running 00:06:21.668 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59490) - No such process 00:06:21.668 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.668 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:21.668 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:21.668 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:21.669 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:21.669 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:21.669 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59463 00:06:21.669 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59463 00:06:21.669 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.928 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59463 00:06:21.928 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59463 ']' 00:06:21.928 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59463 00:06:21.928 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:21.928 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.928 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59463 00:06:21.928 
killing process with pid 59463 00:06:21.928 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.928 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.928 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59463' 00:06:21.928 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59463 00:06:21.928 17:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59463 00:06:25.213 00:06:25.213 real 0m5.213s 00:06:25.213 user 0m5.391s 00:06:25.213 sys 0m0.799s 00:06:25.213 17:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.213 17:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.213 ************************************ 00:06:25.213 END TEST locking_app_on_locked_coremask 00:06:25.213 ************************************ 00:06:25.213 17:51:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:25.213 17:51:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.213 17:51:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.213 17:51:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.213 ************************************ 00:06:25.213 START TEST locking_overlapped_coremask 00:06:25.213 ************************************ 00:06:25.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:25.213 17:51:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:25.213 17:51:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59557 00:06:25.213 17:51:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59557 /var/tmp/spdk.sock 00:06:25.213 17:51:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59557 ']' 00:06:25.213 17:51:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.213 17:51:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.213 17:51:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.213 17:51:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:25.213 17:51:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.213 17:51:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.213 [2024-11-26 17:51:06.511137] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:06:25.213 [2024-11-26 17:51:06.511283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59557 ] 00:06:25.213 [2024-11-26 17:51:06.691558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.213 [2024-11-26 17:51:06.834702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.213 [2024-11-26 17:51:06.834621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.213 [2024-11-26 17:51:06.834732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.162 17:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.162 17:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:26.162 17:51:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59586 00:06:26.162 17:51:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59586 /var/tmp/spdk2.sock 00:06:26.163 17:51:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:26.163 17:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:26.163 17:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59586 /var/tmp/spdk2.sock 00:06:26.163 17:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:26.163 17:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.163 17:51:07 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:26.163 17:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.163 17:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59586 /var/tmp/spdk2.sock 00:06:26.163 17:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59586 ']' 00:06:26.163 17:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.163 17:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.163 17:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.163 17:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.163 17:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.163 [2024-11-26 17:51:07.937787] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:06:26.163 [2024-11-26 17:51:07.938010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59586 ] 00:06:26.422 [2024-11-26 17:51:08.119903] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59557 has claimed it. 00:06:26.422 [2024-11-26 17:51:08.124080] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:26.682 ERROR: process (pid: 59586) is no longer running 00:06:26.682 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59586) - No such process 00:06:26.682 17:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.682 17:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:26.682 17:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:26.682 17:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.682 17:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:26.682 17:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.682 17:51:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:26.682 17:51:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:26.682 17:51:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:26.682 17:51:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:26.682 17:51:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59557 00:06:26.682 17:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59557 ']' 00:06:26.941 17:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59557 00:06:26.941 17:51:08 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:26.941 17:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.941 17:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59557 00:06:26.942 17:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.942 17:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.942 17:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59557' 00:06:26.942 killing process with pid 59557 00:06:26.942 17:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59557 00:06:26.942 17:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59557 00:06:29.472 00:06:29.472 real 0m4.902s 00:06:29.472 user 0m13.386s 00:06:29.472 sys 0m0.664s 00:06:29.472 17:51:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.472 17:51:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.472 ************************************ 00:06:29.472 END TEST locking_overlapped_coremask 00:06:29.472 ************************************ 00:06:29.731 17:51:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:29.731 17:51:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.731 17:51:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.731 17:51:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.731 ************************************ 00:06:29.731 START TEST 
locking_overlapped_coremask_via_rpc 00:06:29.731 ************************************ 00:06:29.731 17:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:29.731 17:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59650 00:06:29.731 17:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:29.731 17:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59650 /var/tmp/spdk.sock 00:06:29.731 17:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59650 ']' 00:06:29.731 17:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.731 17:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.731 17:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.731 17:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.731 17:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.731 [2024-11-26 17:51:11.493364] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:06:29.731 [2024-11-26 17:51:11.494007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59650 ] 00:06:29.999 [2024-11-26 17:51:11.666231] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:29.999 [2024-11-26 17:51:11.666290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.999 [2024-11-26 17:51:11.804734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.999 [2024-11-26 17:51:11.804851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.999 [2024-11-26 17:51:11.804926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.957 17:51:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.957 17:51:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:30.957 17:51:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59674 00:06:30.957 17:51:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:30.957 17:51:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59674 /var/tmp/spdk2.sock 00:06:30.957 17:51:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59674 ']' 00:06:30.957 17:51:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.957 17:51:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.957 17:51:12 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.957 17:51:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.957 17:51:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.216 [2024-11-26 17:51:12.853496] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:06:31.216 [2024-11-26 17:51:12.853712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59674 ] 00:06:31.216 [2024-11-26 17:51:13.030661] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:31.216 [2024-11-26 17:51:13.030719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.475 [2024-11-26 17:51:13.287467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.475 [2024-11-26 17:51:13.291144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.475 [2024-11-26 17:51:13.291181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:34.007 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.007 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:34.007 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:34.007 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.007 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.007 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.007 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.007 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.008 17:51:15 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.008 [2024-11-26 17:51:15.463246] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59650 has claimed it. 00:06:34.008 request: 00:06:34.008 { 00:06:34.008 "method": "framework_enable_cpumask_locks", 00:06:34.008 "req_id": 1 00:06:34.008 } 00:06:34.008 Got JSON-RPC error response 00:06:34.008 response: 00:06:34.008 { 00:06:34.008 "code": -32603, 00:06:34.008 "message": "Failed to claim CPU core: 2" 00:06:34.008 } 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59650 /var/tmp/spdk.sock 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59650 ']' 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59674 /var/tmp/spdk2.sock 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59674 ']' 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.008 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.267 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.267 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:34.267 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:34.267 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:34.267 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:34.267 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:34.267 00:06:34.267 real 0m4.542s 00:06:34.267 user 0m1.399s 00:06:34.267 sys 0m0.199s 00:06:34.267 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.267 17:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.267 ************************************ 00:06:34.267 END TEST locking_overlapped_coremask_via_rpc 00:06:34.267 ************************************ 00:06:34.267 17:51:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:34.267 17:51:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59650 ]] 00:06:34.267 17:51:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59650 00:06:34.267 17:51:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59650 ']' 00:06:34.267 17:51:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59650 00:06:34.267 17:51:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:34.268 17:51:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.268 17:51:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59650 00:06:34.268 17:51:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.268 17:51:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.268 17:51:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59650' 00:06:34.268 killing process with pid 59650 00:06:34.268 17:51:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59650 00:06:34.268 17:51:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59650 00:06:36.800 17:51:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59674 ]] 00:06:36.800 17:51:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59674 00:06:36.800 17:51:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59674 ']' 00:06:36.800 17:51:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59674 00:06:36.800 17:51:18 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:36.800 17:51:18 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.800 17:51:18 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59674 00:06:36.800 17:51:18 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:36.800 17:51:18 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:36.800 killing process with pid 59674 00:06:36.800 17:51:18 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59674' 00:06:36.800 17:51:18 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59674 00:06:36.800 17:51:18 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59674 00:06:39.336 17:51:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.336 17:51:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:39.336 17:51:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59650 ]] 00:06:39.336 17:51:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59650 00:06:39.336 17:51:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59650 ']' 00:06:39.336 Process with pid 59650 is not found 00:06:39.336 17:51:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59650 00:06:39.336 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59650) - No such process 00:06:39.336 17:51:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59650 is not found' 00:06:39.336 17:51:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59674 ]] 00:06:39.336 17:51:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59674 00:06:39.336 Process with pid 59674 is not found 00:06:39.336 17:51:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59674 ']' 00:06:39.336 17:51:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59674 00:06:39.336 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59674) - No such process 00:06:39.336 17:51:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59674 is not found' 00:06:39.336 17:51:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.336 ************************************ 00:06:39.336 END TEST cpu_locks 00:06:39.336 ************************************ 00:06:39.336 00:06:39.336 real 0m54.723s 00:06:39.336 user 1m32.133s 00:06:39.336 sys 0m6.985s 00:06:39.336 17:51:21 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:39.336 17:51:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.336 ************************************ 00:06:39.336 END TEST event 00:06:39.336 ************************************ 00:06:39.336 00:06:39.336 real 1m27.553s 00:06:39.336 user 2m37.707s 00:06:39.336 sys 0m11.147s 00:06:39.336 17:51:21 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.336 17:51:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.596 17:51:21 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:39.596 17:51:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.596 17:51:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.596 17:51:21 -- common/autotest_common.sh@10 -- # set +x 00:06:39.596 ************************************ 00:06:39.596 START TEST thread 00:06:39.596 ************************************ 00:06:39.596 17:51:21 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:39.596 * Looking for test storage... 
00:06:39.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:39.596 17:51:21 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:39.596 17:51:21 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:39.596 17:51:21 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:39.596 17:51:21 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:39.596 17:51:21 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.596 17:51:21 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.596 17:51:21 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.596 17:51:21 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.596 17:51:21 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.596 17:51:21 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.596 17:51:21 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.596 17:51:21 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.596 17:51:21 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.596 17:51:21 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.596 17:51:21 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.596 17:51:21 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:39.596 17:51:21 thread -- scripts/common.sh@345 -- # : 1 00:06:39.596 17:51:21 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.596 17:51:21 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.596 17:51:21 thread -- scripts/common.sh@365 -- # decimal 1 00:06:39.596 17:51:21 thread -- scripts/common.sh@353 -- # local d=1 00:06:39.596 17:51:21 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.596 17:51:21 thread -- scripts/common.sh@355 -- # echo 1 00:06:39.596 17:51:21 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.596 17:51:21 thread -- scripts/common.sh@366 -- # decimal 2 00:06:39.596 17:51:21 thread -- scripts/common.sh@353 -- # local d=2 00:06:39.596 17:51:21 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.596 17:51:21 thread -- scripts/common.sh@355 -- # echo 2 00:06:39.858 17:51:21 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.858 17:51:21 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.858 17:51:21 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.858 17:51:21 thread -- scripts/common.sh@368 -- # return 0 00:06:39.858 17:51:21 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.858 17:51:21 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:39.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.858 --rc genhtml_branch_coverage=1 00:06:39.858 --rc genhtml_function_coverage=1 00:06:39.858 --rc genhtml_legend=1 00:06:39.858 --rc geninfo_all_blocks=1 00:06:39.858 --rc geninfo_unexecuted_blocks=1 00:06:39.858 00:06:39.858 ' 00:06:39.858 17:51:21 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:39.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.858 --rc genhtml_branch_coverage=1 00:06:39.858 --rc genhtml_function_coverage=1 00:06:39.858 --rc genhtml_legend=1 00:06:39.858 --rc geninfo_all_blocks=1 00:06:39.858 --rc geninfo_unexecuted_blocks=1 00:06:39.858 00:06:39.858 ' 00:06:39.858 17:51:21 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:39.858 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.858 --rc genhtml_branch_coverage=1 00:06:39.858 --rc genhtml_function_coverage=1 00:06:39.858 --rc genhtml_legend=1 00:06:39.858 --rc geninfo_all_blocks=1 00:06:39.858 --rc geninfo_unexecuted_blocks=1 00:06:39.858 00:06:39.858 ' 00:06:39.858 17:51:21 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:39.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.858 --rc genhtml_branch_coverage=1 00:06:39.858 --rc genhtml_function_coverage=1 00:06:39.858 --rc genhtml_legend=1 00:06:39.858 --rc geninfo_all_blocks=1 00:06:39.858 --rc geninfo_unexecuted_blocks=1 00:06:39.858 00:06:39.858 ' 00:06:39.858 17:51:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:39.858 17:51:21 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:39.858 17:51:21 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.858 17:51:21 thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.858 ************************************ 00:06:39.858 START TEST thread_poller_perf 00:06:39.858 ************************************ 00:06:39.858 17:51:21 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:39.858 [2024-11-26 17:51:21.523318] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:06:39.858 [2024-11-26 17:51:21.523505] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59869 ] 00:06:39.858 [2024-11-26 17:51:21.696268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.118 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:40.118 [2024-11-26 17:51:21.816956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.575 [2024-11-26T17:51:23.438Z] ====================================== 00:06:41.575 [2024-11-26T17:51:23.438Z] busy:2299505388 (cyc) 00:06:41.575 [2024-11-26T17:51:23.438Z] total_run_count: 390000 00:06:41.575 [2024-11-26T17:51:23.438Z] tsc_hz: 2290000000 (cyc) 00:06:41.575 [2024-11-26T17:51:23.438Z] ====================================== 00:06:41.575 [2024-11-26T17:51:23.438Z] poller_cost: 5896 (cyc), 2574 (nsec) 00:06:41.575 ************************************ 00:06:41.575 END TEST thread_poller_perf 00:06:41.575 ************************************ 00:06:41.575 00:06:41.575 real 0m1.575s 00:06:41.575 user 0m1.371s 00:06:41.575 sys 0m0.098s 00:06:41.575 17:51:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.575 17:51:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.575 17:51:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.575 17:51:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:41.575 17:51:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.575 17:51:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.575 ************************************ 00:06:41.575 START TEST thread_poller_perf 00:06:41.575 
************************************ 00:06:41.575 17:51:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.575 [2024-11-26 17:51:23.162557] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:06:41.575 [2024-11-26 17:51:23.162666] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59911 ] 00:06:41.575 [2024-11-26 17:51:23.339790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.852 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:41.852 [2024-11-26 17:51:23.460891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.232 [2024-11-26T17:51:25.095Z] ====================================== 00:06:43.232 [2024-11-26T17:51:25.095Z] busy:2293628824 (cyc) 00:06:43.232 [2024-11-26T17:51:25.095Z] total_run_count: 4999000 00:06:43.232 [2024-11-26T17:51:25.095Z] tsc_hz: 2290000000 (cyc) 00:06:43.232 [2024-11-26T17:51:25.095Z] ====================================== 00:06:43.232 [2024-11-26T17:51:25.095Z] poller_cost: 458 (cyc), 200 (nsec) 00:06:43.232 00:06:43.232 real 0m1.583s 00:06:43.232 user 0m1.376s 00:06:43.232 sys 0m0.100s 00:06:43.232 ************************************ 00:06:43.232 END TEST thread_poller_perf 00:06:43.232 ************************************ 00:06:43.232 17:51:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.232 17:51:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.232 17:51:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:43.232 00:06:43.232 real 0m3.525s 00:06:43.232 user 0m2.902s 00:06:43.232 sys 0m0.425s 00:06:43.232 17:51:24 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.232 17:51:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.232 ************************************ 00:06:43.232 END TEST thread 00:06:43.232 ************************************ 00:06:43.232 17:51:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:43.232 17:51:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:43.232 17:51:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.232 17:51:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.232 17:51:24 -- common/autotest_common.sh@10 -- # set +x 00:06:43.232 ************************************ 00:06:43.232 START TEST app_cmdline 00:06:43.232 ************************************ 00:06:43.232 17:51:24 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:43.232 * Looking for test storage... 00:06:43.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:43.233 17:51:24 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.233 17:51:24 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.233 17:51:24 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.233 17:51:25 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.233 17:51:25 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:43.233 17:51:25 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.233 17:51:25 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.233 --rc genhtml_branch_coverage=1 00:06:43.233 --rc genhtml_function_coverage=1 00:06:43.233 --rc 
genhtml_legend=1 00:06:43.233 --rc geninfo_all_blocks=1 00:06:43.233 --rc geninfo_unexecuted_blocks=1 00:06:43.233 00:06:43.233 ' 00:06:43.233 17:51:25 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.233 --rc genhtml_branch_coverage=1 00:06:43.233 --rc genhtml_function_coverage=1 00:06:43.233 --rc genhtml_legend=1 00:06:43.233 --rc geninfo_all_blocks=1 00:06:43.233 --rc geninfo_unexecuted_blocks=1 00:06:43.233 00:06:43.233 ' 00:06:43.233 17:51:25 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.233 --rc genhtml_branch_coverage=1 00:06:43.233 --rc genhtml_function_coverage=1 00:06:43.233 --rc genhtml_legend=1 00:06:43.233 --rc geninfo_all_blocks=1 00:06:43.233 --rc geninfo_unexecuted_blocks=1 00:06:43.233 00:06:43.233 ' 00:06:43.233 17:51:25 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.233 --rc genhtml_branch_coverage=1 00:06:43.233 --rc genhtml_function_coverage=1 00:06:43.233 --rc genhtml_legend=1 00:06:43.233 --rc geninfo_all_blocks=1 00:06:43.233 --rc geninfo_unexecuted_blocks=1 00:06:43.233 00:06:43.233 ' 00:06:43.233 17:51:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:43.233 17:51:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59994 00:06:43.233 17:51:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59994 00:06:43.233 17:51:25 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59994 ']' 00:06:43.233 17:51:25 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:43.233 17:51:25 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.233 17:51:25 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:43.233 17:51:25 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.233 17:51:25 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.233 17:51:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:43.493 [2024-11-26 17:51:25.144782] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:06:43.493 [2024-11-26 17:51:25.145007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59994 ] 00:06:43.493 [2024-11-26 17:51:25.321840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.752 [2024-11-26 17:51:25.444891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.690 17:51:26 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.690 17:51:26 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:44.690 17:51:26 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:44.949 { 00:06:44.949 "version": "SPDK v25.01-pre git sha1 9f3071c5f", 00:06:44.949 "fields": { 00:06:44.949 "major": 25, 00:06:44.949 "minor": 1, 00:06:44.949 "patch": 0, 00:06:44.949 "suffix": "-pre", 00:06:44.949 "commit": "9f3071c5f" 00:06:44.949 } 00:06:44.949 } 00:06:44.949 17:51:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:44.949 17:51:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:44.949 17:51:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:44.949 17:51:26 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:44.949 17:51:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:44.949 17:51:26 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.949 17:51:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:44.949 17:51:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:44.949 17:51:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:44.949 17:51:26 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.949 17:51:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:44.949 17:51:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:44.949 17:51:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:44.949 17:51:26 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:44.949 17:51:26 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:44.949 17:51:26 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.949 17:51:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.949 17:51:26 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.949 17:51:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.949 17:51:26 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.949 17:51:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.949 17:51:26 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.949 17:51:26 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:44.949 17:51:26 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.210 request: 00:06:45.210 { 00:06:45.210 "method": "env_dpdk_get_mem_stats", 00:06:45.210 "req_id": 1 00:06:45.210 } 00:06:45.210 Got JSON-RPC error response 00:06:45.210 response: 00:06:45.210 { 00:06:45.210 "code": -32601, 00:06:45.210 "message": "Method not found" 00:06:45.210 } 00:06:45.210 17:51:26 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:45.210 17:51:26 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.210 17:51:26 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:45.210 17:51:26 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.210 17:51:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59994 00:06:45.210 17:51:26 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59994 ']' 00:06:45.210 17:51:26 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59994 00:06:45.210 17:51:26 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:45.210 17:51:26 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.210 17:51:26 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59994 00:06:45.210 17:51:26 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.210 17:51:26 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.210 17:51:26 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59994' 00:06:45.210 killing process with pid 59994 00:06:45.210 17:51:26 app_cmdline -- common/autotest_common.sh@973 -- # kill 59994 00:06:45.210 17:51:26 app_cmdline -- common/autotest_common.sh@978 -- # wait 59994 00:06:47.753 00:06:47.753 real 0m4.586s 00:06:47.753 user 0m4.854s 00:06:47.753 sys 0m0.629s 00:06:47.753 
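The request/response pair above is a standard JSON-RPC "Method not found" error (code -32601): spdk_tgt was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so the env_dpdk_get_mem_stats call is rejected, which is exactly what this negative test expects. A minimal sketch of that allow-list dispatch, with a hypothetical handler shape (not SPDK's actual implementation):

```python
# Sketch of an allow-list JSON-RPC dispatcher rejecting a method,
# mirroring the env_dpdk_get_mem_stats error above. Names are hypothetical.
import json

RPCS_ALLOWED = {"spdk_get_version", "rpc_get_methods"}  # from --rpcs-allowed

def dispatch(request: dict) -> dict:
    method = request["method"]
    if method not in RPCS_ALLOWED:
        # JSON-RPC 2.0 reserves -32601 for "Method not found".
        return {"code": -32601, "message": "Method not found"}
    return {"code": 0, "message": "ok"}  # placeholder success path

resp = dispatch({"method": "env_dpdk_get_mem_stats", "req_id": 1})
print(json.dumps(resp))
```

The test then asserts `es=1` from the failed rpc.py call, confirming the restriction works.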
************************************ 00:06:47.753 END TEST app_cmdline 00:06:47.753 ************************************ 00:06:47.753 17:51:29 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.753 17:51:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:47.753 17:51:29 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:47.753 17:51:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.753 17:51:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.753 17:51:29 -- common/autotest_common.sh@10 -- # set +x 00:06:47.753 ************************************ 00:06:47.753 START TEST version 00:06:47.753 ************************************ 00:06:47.753 17:51:29 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:47.753 * Looking for test storage... 00:06:47.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:47.753 17:51:29 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.753 17:51:29 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.753 17:51:29 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.014 17:51:29 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.014 17:51:29 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.014 17:51:29 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.014 17:51:29 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.014 17:51:29 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.014 17:51:29 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.014 17:51:29 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.014 17:51:29 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.014 17:51:29 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.014 17:51:29 version -- scripts/common.sh@340 -- # ver1_l=2 
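The xtrace run above (repeated at the start of each test section) is scripts/common.sh's `lt 1.15 2` check on the lcov version: each version string is split on `.`, `-`, and `:` (via `IFS=.-:` and `read -ra`), then compared component by component as integers, iterating up to the longer list's length. A rough Python equivalent of that comparison, as a sketch of the traced shell logic rather than a verbatim port:

```python
# Sketch of the cmp_versions logic traced above: split each version on
# ".", "-" or ":" and compare numeric components left to right.
import re

def version_lt(a: str, b: str) -> bool:
    va = [int(x) for x in re.split(r"[.:-]", a) if x.isdigit()]
    vb = [int(x) for x in re.split(r"[.:-]", b) if x.isdigit()]
    # Missing components compare as zero, as the shell array loop does.
    n = max(len(va), len(vb))
    va += [0] * (n - len(va))
    vb += [0] * (n - len(vb))
    return va < vb  # lexicographic tuple comparison

print(version_lt("1.15", "2"))  # True: lcov 1.15 is older than 2
```

In the trace, the first component comparison `(( ver1[v] < ver2[v] ))` with 1 vs 2 succeeds and the helper returns 0, so the older-lcov fallback LCOV_OPTS are exported.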
00:06:48.014 17:51:29 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.014 17:51:29 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.014 17:51:29 version -- scripts/common.sh@344 -- # case "$op" in 00:06:48.014 17:51:29 version -- scripts/common.sh@345 -- # : 1 00:06:48.014 17:51:29 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.014 17:51:29 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.014 17:51:29 version -- scripts/common.sh@365 -- # decimal 1 00:06:48.014 17:51:29 version -- scripts/common.sh@353 -- # local d=1 00:06:48.014 17:51:29 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.014 17:51:29 version -- scripts/common.sh@355 -- # echo 1 00:06:48.014 17:51:29 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.014 17:51:29 version -- scripts/common.sh@366 -- # decimal 2 00:06:48.014 17:51:29 version -- scripts/common.sh@353 -- # local d=2 00:06:48.014 17:51:29 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.014 17:51:29 version -- scripts/common.sh@355 -- # echo 2 00:06:48.014 17:51:29 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.014 17:51:29 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.014 17:51:29 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.014 17:51:29 version -- scripts/common.sh@368 -- # return 0 00:06:48.014 17:51:29 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.014 17:51:29 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.014 --rc genhtml_branch_coverage=1 00:06:48.014 --rc genhtml_function_coverage=1 00:06:48.014 --rc genhtml_legend=1 00:06:48.014 --rc geninfo_all_blocks=1 00:06:48.014 --rc geninfo_unexecuted_blocks=1 00:06:48.014 00:06:48.014 ' 00:06:48.014 17:51:29 version -- 
common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.014 --rc genhtml_branch_coverage=1 00:06:48.014 --rc genhtml_function_coverage=1 00:06:48.014 --rc genhtml_legend=1 00:06:48.014 --rc geninfo_all_blocks=1 00:06:48.014 --rc geninfo_unexecuted_blocks=1 00:06:48.014 00:06:48.014 ' 00:06:48.014 17:51:29 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.014 --rc genhtml_branch_coverage=1 00:06:48.014 --rc genhtml_function_coverage=1 00:06:48.014 --rc genhtml_legend=1 00:06:48.014 --rc geninfo_all_blocks=1 00:06:48.014 --rc geninfo_unexecuted_blocks=1 00:06:48.014 00:06:48.014 ' 00:06:48.014 17:51:29 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.014 --rc genhtml_branch_coverage=1 00:06:48.014 --rc genhtml_function_coverage=1 00:06:48.014 --rc genhtml_legend=1 00:06:48.014 --rc geninfo_all_blocks=1 00:06:48.014 --rc geninfo_unexecuted_blocks=1 00:06:48.014 00:06:48.014 ' 00:06:48.014 17:51:29 version -- app/version.sh@17 -- # get_header_version major 00:06:48.014 17:51:29 version -- app/version.sh@14 -- # cut -f2 00:06:48.014 17:51:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:48.014 17:51:29 version -- app/version.sh@14 -- # tr -d '"' 00:06:48.014 17:51:29 version -- app/version.sh@17 -- # major=25 00:06:48.014 17:51:29 version -- app/version.sh@18 -- # get_header_version minor 00:06:48.014 17:51:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:48.014 17:51:29 version -- app/version.sh@14 -- # tr -d '"' 00:06:48.014 17:51:29 version -- app/version.sh@14 -- # cut -f2 00:06:48.014 17:51:29 version -- app/version.sh@18 -- 
# minor=1 00:06:48.014 17:51:29 version -- app/version.sh@19 -- # get_header_version patch 00:06:48.014 17:51:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:48.014 17:51:29 version -- app/version.sh@14 -- # cut -f2 00:06:48.014 17:51:29 version -- app/version.sh@14 -- # tr -d '"' 00:06:48.014 17:51:29 version -- app/version.sh@19 -- # patch=0 00:06:48.014 17:51:29 version -- app/version.sh@20 -- # get_header_version suffix 00:06:48.014 17:51:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:48.015 17:51:29 version -- app/version.sh@14 -- # cut -f2 00:06:48.015 17:51:29 version -- app/version.sh@14 -- # tr -d '"' 00:06:48.015 17:51:29 version -- app/version.sh@20 -- # suffix=-pre 00:06:48.015 17:51:29 version -- app/version.sh@22 -- # version=25.1 00:06:48.015 17:51:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:48.015 17:51:29 version -- app/version.sh@28 -- # version=25.1rc0 00:06:48.015 17:51:29 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:48.015 17:51:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:48.015 17:51:29 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:48.015 17:51:29 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:48.015 ************************************ 00:06:48.015 END TEST version 00:06:48.015 ************************************ 00:06:48.015 00:06:48.015 real 0m0.329s 00:06:48.015 user 0m0.192s 00:06:48.015 sys 0m0.191s 00:06:48.015 17:51:29 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.015 17:51:29 version -- 
common/autotest_common.sh@10 -- # set +x 00:06:48.015 17:51:29 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:48.015 17:51:29 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:48.015 17:51:29 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:48.015 17:51:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.015 17:51:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.015 17:51:29 -- common/autotest_common.sh@10 -- # set +x 00:06:48.015 ************************************ 00:06:48.015 START TEST bdev_raid 00:06:48.015 ************************************ 00:06:48.015 17:51:29 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:48.276 * Looking for test storage... 00:06:48.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:48.276 17:51:29 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.276 17:51:29 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.276 17:51:29 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.276 17:51:30 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.276 
17:51:30 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.276 17:51:30 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:48.276 17:51:30 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.276 17:51:30 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.276 --rc genhtml_branch_coverage=1 00:06:48.276 --rc genhtml_function_coverage=1 00:06:48.276 --rc genhtml_legend=1 00:06:48.276 --rc geninfo_all_blocks=1 00:06:48.276 --rc geninfo_unexecuted_blocks=1 00:06:48.276 00:06:48.276 ' 00:06:48.276 17:51:30 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:06:48.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.276 --rc genhtml_branch_coverage=1 00:06:48.276 --rc genhtml_function_coverage=1 00:06:48.276 --rc genhtml_legend=1 00:06:48.276 --rc geninfo_all_blocks=1 00:06:48.276 --rc geninfo_unexecuted_blocks=1 00:06:48.276 00:06:48.276 ' 00:06:48.276 17:51:30 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.276 --rc genhtml_branch_coverage=1 00:06:48.276 --rc genhtml_function_coverage=1 00:06:48.276 --rc genhtml_legend=1 00:06:48.276 --rc geninfo_all_blocks=1 00:06:48.276 --rc geninfo_unexecuted_blocks=1 00:06:48.276 00:06:48.276 ' 00:06:48.276 17:51:30 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.276 --rc genhtml_branch_coverage=1 00:06:48.276 --rc genhtml_function_coverage=1 00:06:48.276 --rc genhtml_legend=1 00:06:48.276 --rc geninfo_all_blocks=1 00:06:48.276 --rc geninfo_unexecuted_blocks=1 00:06:48.276 00:06:48.276 ' 00:06:48.277 17:51:30 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:48.277 17:51:30 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:48.277 17:51:30 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:48.277 17:51:30 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:48.277 17:51:30 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:48.277 17:51:30 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:48.277 17:51:30 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:48.277 17:51:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.277 17:51:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.277 17:51:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:06:48.277 ************************************ 00:06:48.277 START TEST raid1_resize_data_offset_test 00:06:48.277 ************************************ 00:06:48.277 17:51:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:48.277 17:51:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60186 00:06:48.277 17:51:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60186' 00:06:48.277 Process raid pid: 60186 00:06:48.277 17:51:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:48.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.277 17:51:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60186 00:06:48.277 17:51:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60186 ']' 00:06:48.277 17:51:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.277 17:51:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.277 17:51:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.277 17:51:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.277 17:51:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.537 [2024-11-26 17:51:30.219668] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:06:48.537 [2024-11-26 17:51:30.219936] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.797 [2024-11-26 17:51:30.407857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.797 [2024-11-26 17:51:30.534501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.057 [2024-11-26 17:51:30.757955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.057 [2024-11-26 17:51:30.758011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.317 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.317 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:49.317 17:51:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:49.317 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.317 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.317 malloc0 00:06:49.317 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.317 17:51:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:49.317 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.317 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.576 malloc1 00:06:49.576 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.576 17:51:31 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:49.576 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.576 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.576 null0 00:06:49.576 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.576 17:51:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:49.576 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.577 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.577 [2024-11-26 17:51:31.269709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:49.577 [2024-11-26 17:51:31.272062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:49.577 [2024-11-26 17:51:31.272127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:49.577 [2024-11-26 17:51:31.272310] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:49.577 [2024-11-26 17:51:31.272327] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:49.577 [2024-11-26 17:51:31.272654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:49.577 [2024-11-26 17:51:31.272841] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:49.577 [2024-11-26 17:51:31.272855] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:49.577 [2024-11-26 17:51:31.273204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:49.577 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.577 17:51:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.577 17:51:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:49.577 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.577 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.577 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.577 17:51:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:49.577 17:51:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:49.577 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.577 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.577 [2024-11-26 17:51:31.329647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:49.577 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.577 17:51:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:49.577 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.577 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.146 malloc2 00:06:50.146 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.146 17:51:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:50.146 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.146 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.146 [2024-11-26 17:51:31.924387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:50.146 [2024-11-26 17:51:31.943075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:50.146 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.146 [2024-11-26 17:51:31.945208] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:50.146 17:51:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.146 17:51:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:50.146 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.146 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.146 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.146 17:51:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:50.146 17:51:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60186 00:06:50.146 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60186 ']' 00:06:50.146 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60186 00:06:50.146 17:51:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:50.146 17:51:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:50.406 17:51:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60186 00:06:50.406 killing process with pid 60186 00:06:50.406 17:51:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.406 17:51:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.406 17:51:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60186' 00:06:50.406 17:51:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60186 00:06:50.406 17:51:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60186 00:06:50.406 [2024-11-26 17:51:32.044546] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:50.406 [2024-11-26 17:51:32.045756] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:50.406 [2024-11-26 17:51:32.045826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:50.406 [2024-11-26 17:51:32.045845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:50.406 [2024-11-26 17:51:32.087673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:50.406 [2024-11-26 17:51:32.088159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:50.406 [2024-11-26 17:51:32.088187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:52.314 [2024-11-26 17:51:34.074567] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:53.696 ************************************ 00:06:53.696 END TEST raid1_resize_data_offset_test 00:06:53.696 ************************************ 00:06:53.696 17:51:35 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:06:53.696 00:06:53.696 real 0m5.233s 00:06:53.696 user 0m5.138s 00:06:53.696 sys 0m0.576s 00:06:53.696 17:51:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.696 17:51:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.696 17:51:35 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:53.696 17:51:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:53.696 17:51:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.696 17:51:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:53.696 ************************************ 00:06:53.696 START TEST raid0_resize_superblock_test 00:06:53.696 ************************************ 00:06:53.696 Process raid pid: 60277 00:06:53.696 17:51:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:53.696 17:51:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:53.696 17:51:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60277 00:06:53.696 17:51:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60277' 00:06:53.696 17:51:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:53.696 17:51:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60277 00:06:53.696 17:51:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60277 ']' 00:06:53.696 17:51:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.696 17:51:35 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.696 17:51:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.696 17:51:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.696 17:51:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.696 [2024-11-26 17:51:35.515960] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:06:53.696 [2024-11-26 17:51:35.516257] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.956 [2024-11-26 17:51:35.701071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.216 [2024-11-26 17:51:35.832057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.216 [2024-11-26 17:51:36.056999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.216 [2024-11-26 17:51:36.057152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.785 17:51:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.785 17:51:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:54.785 17:51:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:54.785 17:51:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.785 17:51:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:06:55.355 malloc0 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.355 [2024-11-26 17:51:37.008668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:55.355 [2024-11-26 17:51:37.008879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:55.355 [2024-11-26 17:51:37.008959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:55.355 [2024-11-26 17:51:37.009007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:55.355 [2024-11-26 17:51:37.011801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:55.355 [2024-11-26 17:51:37.011941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:55.355 pt0 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.355 8d9d23c9-62eb-40d4-a4c6-8ed9713a72b2 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.355 7fa32372-6014-4b10-8e5b-0e27fae1265b 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.355 d8c7675e-05e8-4dad-92e4-ae38b88668de 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.355 [2024-11-26 17:51:37.142465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7fa32372-6014-4b10-8e5b-0e27fae1265b is claimed 00:06:55.355 [2024-11-26 17:51:37.142611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d8c7675e-05e8-4dad-92e4-ae38b88668de is claimed 00:06:55.355 [2024-11-26 17:51:37.142774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:55.355 [2024-11-26 17:51:37.142791] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:55.355 [2024-11-26 17:51:37.143146] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:55.355 [2024-11-26 17:51:37.143383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:55.355 [2024-11-26 17:51:37.143406] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:55.355 [2024-11-26 17:51:37.143613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.355 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:55.616 17:51:37 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.616 [2024-11-26 17:51:37.254523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.616 [2024-11-26 17:51:37.298542] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.616 [2024-11-26 17:51:37.298603] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7fa32372-6014-4b10-8e5b-0e27fae1265b' was resized: old size 131072, new size 204800 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.616 [2024-11-26 17:51:37.310468] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.616 [2024-11-26 17:51:37.310517] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'd8c7675e-05e8-4dad-92e4-ae38b88668de' was resized: old size 131072, new size 204800 00:06:55.616 [2024-11-26 17:51:37.310576] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.616 17:51:37 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:55.616 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.617 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:55.617 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:55.617 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.617 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.617 [2024-11-26 17:51:37.426354] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.617 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.617 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:55.617 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:55.617 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:55.617 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:55.617 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.617 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.617 [2024-11-26 17:51:37.473994] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:55.617 [2024-11-26 17:51:37.474240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:55.617 [2024-11-26 17:51:37.474293] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:55.617 [2024-11-26 17:51:37.474338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:55.617 [2024-11-26 17:51:37.474583] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:55.617 [2024-11-26 17:51:37.474688] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:55.617 [2024-11-26 17:51:37.474754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:55.877 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.877 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:55.877 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.877 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.877 [2024-11-26 17:51:37.485817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:55.877 [2024-11-26 17:51:37.485911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:55.877 [2024-11-26 17:51:37.485938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:55.877 [2024-11-26 17:51:37.485951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:55.877 [2024-11-26 17:51:37.488474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:55.877 [2024-11-26 17:51:37.488528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:55.877 [2024-11-26 17:51:37.490306] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7fa32372-6014-4b10-8e5b-0e27fae1265b 00:06:55.877 [2024-11-26 17:51:37.490385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7fa32372-6014-4b10-8e5b-0e27fae1265b is claimed 00:06:55.877 [2024-11-26 17:51:37.490487] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev d8c7675e-05e8-4dad-92e4-ae38b88668de 00:06:55.877 [2024-11-26 17:51:37.490507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d8c7675e-05e8-4dad-92e4-ae38b88668de is claimed 00:06:55.877 [2024-11-26 17:51:37.490706] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev d8c7675e-05e8-4dad-92e4-ae38b88668de (2) smaller than existing raid bdev Raid (3) 00:06:55.877 [2024-11-26 17:51:37.490734] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 7fa32372-6014-4b10-8e5b-0e27fae1265b: File exists 00:06:55.877 [2024-11-26 17:51:37.490773] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:55.877 [2024-11-26 17:51:37.490784] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:55.877 [2024-11-26 17:51:37.491071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:55.877 pt0 00:06:55.877 [2024-11-26 17:51:37.491226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:55.877 [2024-11-26 17:51:37.491236] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:55.877 [2024-11-26 17:51:37.491402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.877 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.877 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:55.877 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.878 [2024-11-26 17:51:37.514633] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60277 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60277 ']' 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60277 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60277 00:06:55.878 killing process with pid 60277 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60277' 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60277 00:06:55.878 [2024-11-26 17:51:37.591185] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:55.878 [2024-11-26 17:51:37.591302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:55.878 17:51:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60277 00:06:55.878 [2024-11-26 17:51:37.591362] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:55.878 [2024-11-26 17:51:37.591372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:57.785 [2024-11-26 17:51:39.200684] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:58.722 17:51:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:58.722 00:06:58.722 real 0m5.063s 00:06:58.722 user 0m5.311s 00:06:58.722 sys 0m0.628s 00:06:58.722 17:51:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.722 ************************************ 00:06:58.722 END TEST raid0_resize_superblock_test 00:06:58.722 
************************************ 00:06:58.722 17:51:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.722 17:51:40 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:58.722 17:51:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:58.722 17:51:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.722 17:51:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:58.722 ************************************ 00:06:58.722 START TEST raid1_resize_superblock_test 00:06:58.722 ************************************ 00:06:58.722 17:51:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:58.722 17:51:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:58.722 17:51:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60380 00:06:58.722 17:51:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:58.722 17:51:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60380' 00:06:58.722 Process raid pid: 60380 00:06:58.722 17:51:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60380 00:06:58.722 17:51:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60380 ']' 00:06:58.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:58.722 17:51:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:58.722 17:51:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:58.722 17:51:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:58.722 17:51:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:58.722 17:51:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:58.982 [2024-11-26 17:51:40.634404] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization...
00:06:58.982 [2024-11-26 17:51:40.634635] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:58.982 [2024-11-26 17:51:40.824694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:59.240 [2024-11-26 17:51:40.955891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:59.499 [2024-11-26 17:51:41.187460] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:59.499 [2024-11-26 17:51:41.187520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:59.758 17:51:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:59.758 17:51:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:06:59.758 17:51:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:59.758 17:51:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.758 17:51:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.697 malloc0
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.697 [2024-11-26 17:51:42.208635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:07:00.697 [2024-11-26 17:51:42.208841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:00.697 [2024-11-26 17:51:42.208885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:00.697 [2024-11-26 17:51:42.208918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:00.697 [2024-11-26 17:51:42.211627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:00.697 [2024-11-26 17:51:42.211689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:07:00.697 pt0
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.697 5ef4a8e0-f0c7-4b67-9152-511190dc4e39
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.697 7cff57ce-8e8f-4045-8d46-56cf7bf9c7eb
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.697 56c19f20-cf53-4ef1-a750-6fe8ccdd2ec2
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.697 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.697 [2024-11-26 17:51:42.350117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7cff57ce-8e8f-4045-8d46-56cf7bf9c7eb is claimed
00:07:00.697 [2024-11-26 17:51:42.350438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 56c19f20-cf53-4ef1-a750-6fe8ccdd2ec2 is claimed
00:07:00.697 [2024-11-26 17:51:42.350723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:00.697 [2024-11-26 17:51:42.350790] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:07:00.698 [2024-11-26 17:51:42.351237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:00.698 [2024-11-26 17:51:42.351519] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:00.698 [2024-11-26 17:51:42.351571] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:07:00.698 [2024-11-26 17:51:42.351833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks'
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.698 [2024-11-26 17:51:42.470278] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 ))
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.698 [2024-11-26 17:51:42.506236] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:00.698 [2024-11-26 17:51:42.506307] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7cff57ce-8e8f-4045-8d46-56cf7bf9c7eb' was resized: old size 131072, new size 204800
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.698 [2024-11-26 17:51:42.518172] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:00.698 [2024-11-26 17:51:42.518235] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '56c19f20-cf53-4ef1-a750-6fe8ccdd2ec2' was resized: old size 131072, new size 204800
00:07:00.698 [2024-11-26 17:51:42.518287] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.698 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.959 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:07:00.959 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:00.959 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.959 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.959 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:07:00.959 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.959 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:07:00.959 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:00.959 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:00.959 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.959 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.959 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:00.959 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks'
[2024-11-26 17:51:42.637981] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 ))
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.960 [2024-11-26 17:51:42.685632] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:07:00.960 [2024-11-26 17:51:42.685937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:07:00.960 [2024-11-26 17:51:42.685983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:07:00.960 [2024-11-26 17:51:42.686240] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:00.960 [2024-11-26 17:51:42.686567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:00.960 [2024-11-26 17:51:42.686661] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:00.960 [2024-11-26 17:51:42.686677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.960 [2024-11-26 17:51:42.697459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:07:00.960 [2024-11-26 17:51:42.697584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:00.960 [2024-11-26 17:51:42.697613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:07:00.960 [2024-11-26 17:51:42.697633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:00.960 [2024-11-26 17:51:42.700492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:00.960 [2024-11-26 17:51:42.700701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:07:00.960 pt0
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[2024-11-26 17:51:42.703173] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7cff57ce-8e8f-4045-8d46-56cf7bf9c7eb
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
[2024-11-26 17:51:42.703275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7cff57ce-8e8f-4045-8d46-56cf7bf9c7eb is claimed
[2024-11-26 17:51:42.703418] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 56c19f20-cf53-4ef1-a750-6fe8ccdd2ec2
[2024-11-26 17:51:42.703442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 56c19f20-cf53-4ef1-a750-6fe8ccdd2ec2 is claimed
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
[2024-11-26 17:51:42.703586] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 56c19f20-cf53-4ef1-a750-6fe8ccdd2ec2 (2) smaller than existing raid bdev Raid (3)
[2024-11-26 17:51:42.703615] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 7cff57ce-8e8f-4045-8d46-56cf7bf9c7eb: File exists
[2024-11-26 17:51:42.703670] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
[2024-11-26 17:51:42.703683] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-26 17:51:42.703981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
[2024-11-26 17:51:42.704191] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-11-26 17:51:42.704202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
[2024-11-26 17:51:42.704387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks'
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-26 17:51:42.725725] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 ))
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60380
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60380 ']'
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60380
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60380
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60380'
killing process with pid 60380
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60380
00:07:00.960 [2024-11-26 17:51:42.816491] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:00.960 17:51:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60380
00:07:00.960 [2024-11-26 17:51:42.816754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:00.960 [2024-11-26 17:51:42.816856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:00.960 [2024-11-26 17:51:42.816941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:07:02.869 [2024-11-26 17:51:44.502461] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:04.311 17:51:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:07:04.311
00:07:04.311 real 0m5.288s
00:07:04.311 user 0m5.572s
00:07:04.311 sys 0m0.652s
00:07:04.311 17:51:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:04.311 17:51:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:04.311 ************************************
00:07:04.311 END TEST raid1_resize_superblock_test
00:07:04.311 ************************************
00:07:04.311 17:51:45 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s
00:07:04.311 17:51:45 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']'
00:07:04.311 17:51:45 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd
00:07:04.311 17:51:45 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true
00:07:04.311 17:51:45 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd
00:07:04.311 17:51:45 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0
00:07:04.311 17:51:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:04.311 17:51:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:04.311 17:51:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:04.311 ************************************
00:07:04.311 START TEST raid_function_test_raid0
00:07:04.311 ************************************
00:07:04.311 17:51:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0
00:07:04.311 17:51:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0
00:07:04.311 17:51:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:07:04.311 17:51:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:07:04.311 17:51:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60489
00:07:04.311 17:51:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:04.311 17:51:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60489'
Process raid pid: 60489
00:07:04.311 17:51:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60489
00:07:04.311 17:51:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60489 ']'
00:07:04.311 17:51:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:04.311 17:51:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:04.311 17:51:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:04.311 17:51:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:04.311 17:51:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
[2024-11-26 17:51:46.021884] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization...
00:07:04.311 [2024-11-26 17:51:46.022149] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:04.570 [2024-11-26 17:51:46.204331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:04.570 [2024-11-26 17:51:46.342221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:04.830 [2024-11-26 17:51:46.579965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:04.830 [2024-11-26 17:51:46.580130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:05.399 17:51:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:05.399 17:51:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0
00:07:05.399 17:51:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:07:05.399 17:51:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.399 17:51:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:05.399 Base_1
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:05.399 Base_2
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:05.399 [2024-11-26 17:51:47.075013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:07:05.399 [2024-11-26 17:51:47.077198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:07:05.399 [2024-11-26 17:51:47.077297] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:05.399 [2024-11-26 17:51:47.077312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:05.399 [2024-11-26 17:51:47.077652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:05.399 [2024-11-26 17:51:47.077831] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:05.399 [2024-11-26 17:51:47.077842] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780
00:07:05.399 [2024-11-26 17:51:47.078079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:07:05.399 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:07:05.660 [2024-11-26 17:51:47.366580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:07:05.660 /dev/nbd0
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:05.660 1+0 records in
00:07:05.660 1+0 records out
00:07:05.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734878 s, 5.6 MB/s
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:05.660 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:05.919 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:05.919 {
00:07:05.919 "nbd_device": "/dev/nbd0",
00:07:05.919 "bdev_name": "raid"
00:07:05.919 }
00:07:05.919 ]'
00:07:05.919 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[
00:07:05.919 {
00:07:05.919 "nbd_device": "/dev/nbd0",
00:07:05.919 "bdev_name": "raid"
00:07:05.919 }
00:07:05.919 ]'
00:07:05.919 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:05.920 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:07:05.920 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:07:05.920 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:05.920 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1
00:07:05.920 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1
00:07:05.920 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1
00:07:05.920 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:07:05.920 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:07:05.920 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:07:05.920 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:07:05.920 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize
00:07:05.920 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:07:05.920 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:07:05.920 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:07:06.179 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512
00:07:06.179 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:07:06.179 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:07:06.179 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:07:06.179 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:07:06.179 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:07:06.179 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:07:06.179 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:07:06.179 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:07:06.179 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:07:06.179 4096+0 records in
00:07:06.179 4096+0 records out
00:07:06.179 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0355908 s, 58.9 MB/s
00:07:06.179 17:51:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:07:06.439 4096+0 records in
00:07:06.439 4096+0 records out
00:07:06.439 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.230888 s, 9.1 MB/s
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:07:06.439 128+0 records in
00:07:06.439 128+0 records out
00:07:06.439 65536 bytes (66 kB, 64 KiB) copied, 0.00126839 s, 51.7 MB/s
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:07:06.439 2035+0 records in
00:07:06.439 2035+0 records out
00:07:06.439 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0156775 s, 66.5 MB/s
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:07:06.439 456+0 records in
00:07:06.439 456+0 records out
00:07:06.439 233472 bytes (233 kB, 228 KiB) copied, 0.00375455 s, 62.2 MB/s
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:06.439 17:51:48
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.439 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:06.699 [2024-11-26 17:51:48.465358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.699 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.699 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.699 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.699 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.699 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.699 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.699 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:06.699 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.699 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:06.699 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:06.699 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:06.958 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.958 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.958 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:06.958 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.958 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.958 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.958 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:06.958 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.958 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.958 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:06.958 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:06.958 17:51:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60489 00:07:06.958 17:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60489 ']' 00:07:06.958 17:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60489 00:07:06.958 17:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:06.958 17:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.958 17:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60489 00:07:07.228 17:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.228 17:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.228 killing process with pid 60489 00:07:07.228 17:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60489' 00:07:07.228 17:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60489 
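The bare `nbd_common.sh@65 -- # true` step in the trace above, followed by `count=0`, is the tail of a `grep -c ... || true` pipeline: `grep -c` still prints `0` when nothing matches, but exits non-zero, so the count helper ORs it with `true` to survive under `set -e`. A minimal sketch of that idiom (the device names fed in are illustrative, not taken from a live RPC call):

```shell
#!/usr/bin/env bash
# Count nbd device names in a newline-separated list. grep -c prints the
# match count but exits 1 when the count is 0, so we mask the failure
# with || true -- this is the step that shows up as "-- # true" in the trace.
count_nbd() {
    local names=$1
    echo "$names" | grep -c /dev/nbd || true
}

count_nbd "/dev/nbd0"   # prints 1
count_nbd ""            # prints 0 (grep exits 1, masked by || true)
```

The same shape explains why the earlier, non-empty pass printed `count=1` without a `true` step: with one match, `grep -c` exits zero and the `||` branch is never taken.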
00:07:07.228 [2024-11-26 17:51:48.832644] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:07.228 [2024-11-26 17:51:48.832777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.228 17:51:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60489 00:07:07.228 [2024-11-26 17:51:48.832834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:07.228 [2024-11-26 17:51:48.832853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:07.498 [2024-11-26 17:51:49.085787] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.911 ************************************ 00:07:08.911 END TEST raid_function_test_raid0 00:07:08.911 ************************************ 00:07:08.911 17:51:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:08.911 00:07:08.911 real 0m4.498s 00:07:08.911 user 0m5.332s 00:07:08.911 sys 0m1.067s 00:07:08.911 17:51:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.911 17:51:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:08.911 17:51:50 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:08.911 17:51:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:08.911 17:51:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.911 17:51:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.911 ************************************ 00:07:08.911 START TEST raid_function_test_concat 00:07:08.911 ************************************ 00:07:08.911 17:51:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:08.912 17:51:50 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:08.912 17:51:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:08.912 17:51:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:08.912 Process raid pid: 60618 00:07:08.912 17:51:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60618 00:07:08.912 17:51:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:08.912 17:51:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60618' 00:07:08.912 17:51:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60618 00:07:08.912 17:51:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60618 ']' 00:07:08.912 17:51:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.912 17:51:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.912 17:51:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.912 17:51:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.912 17:51:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:08.912 [2024-11-26 17:51:50.588156] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:07:08.912 [2024-11-26 17:51:50.588398] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.912 [2024-11-26 17:51:50.772240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.172 [2024-11-26 17:51:50.912271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.431 [2024-11-26 17:51:51.151378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.431 [2024-11-26 17:51:51.151526] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.690 17:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.690 17:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:09.690 17:51:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:09.690 17:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.690 17:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:09.949 Base_1 00:07:09.949 17:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.949 17:51:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:09.949 17:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.949 17:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:09.949 Base_2 00:07:09.949 17:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.949 17:51:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:09.949 17:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.949 17:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:09.949 [2024-11-26 17:51:51.644730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:09.949 [2024-11-26 17:51:51.647454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:09.949 [2024-11-26 17:51:51.647593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:09.949 [2024-11-26 17:51:51.647608] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:09.949 [2024-11-26 17:51:51.647969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:09.949 [2024-11-26 17:51:51.648179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:09.949 [2024-11-26 17:51:51.648195] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:09.949 [2024-11-26 17:51:51.648412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.949 17:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.949 17:51:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:09.949 17:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.949 17:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:09.949 17:51:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:09.949 17:51:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.949 17:51:51 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:09.949 17:51:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:09.949 17:51:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:09.949 17:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:09.949 17:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:09.950 17:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:09.950 17:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:09.950 17:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:09.950 17:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:09.950 17:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:09.950 17:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:09.950 17:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:10.210 [2024-11-26 17:51:51.952356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:10.210 /dev/nbd0 00:07:10.210 17:51:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.210 1+0 records in 00:07:10.210 1+0 records out 00:07:10.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442132 s, 9.3 MB/s 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
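Each unmap pass in these traces converts 512-byte block numbers into the byte offset and length handed to `blkdiscard -o/-l`: the logged `unmap_off=526336` / `unmap_len=1041920` are just `1028 * 512` and `2035 * 512`. A sketch of that arithmetic, using the same block lists the `bdev_raid.sh` trace shows:

```shell
#!/usr/bin/env bash
# Reproduce the unmap_off/unmap_len values seen in the trace:
# byte offset = block offset * block size, byte length = block count * block size.
blksize=512
unmap_blk_offs=(0 1028 321)
unmap_blk_nums=(128 2035 456)

for i in "${!unmap_blk_offs[@]}"; do
    unmap_off=$((unmap_blk_offs[i] * blksize))
    unmap_len=$((unmap_blk_nums[i] * blksize))
    echo "pass $i: blkdiscard -o $unmap_off -l $unmap_len"
done
# pass 0: blkdiscard -o 0      -l 65536
# pass 1: blkdiscard -o 526336 -l 1041920
# pass 2: blkdiscard -o 164352 -l 233472
```

Between each discard, the test zeroes the same block range in the reference file with `dd conv=notrunc` and then `cmp`s the whole 2 MiB against the device, so a discard that failed to zero the RAID bdev would surface as a byte mismatch.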
00:07:10.210 17:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:10.470 17:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:10.470 { 00:07:10.470 "nbd_device": "/dev/nbd0", 00:07:10.470 "bdev_name": "raid" 00:07:10.470 } 00:07:10.470 ]' 00:07:10.470 17:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:10.470 { 00:07:10.470 "nbd_device": "/dev/nbd0", 00:07:10.470 "bdev_name": "raid" 00:07:10.470 } 00:07:10.470 ]' 00:07:10.470 17:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:10.729 17:51:52 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:10.729 4096+0 records in 00:07:10.729 4096+0 records out 00:07:10.729 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0312695 s, 67.1 MB/s 00:07:10.729 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:10.989 4096+0 records in 00:07:10.989 4096+0 records out 00:07:10.989 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.251004 s, 8.4 MB/s 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:10.989 128+0 records in 00:07:10.989 128+0 records out 00:07:10.989 65536 bytes (66 kB, 64 KiB) copied, 0.00120074 s, 54.6 MB/s 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:10.989 2035+0 records in 00:07:10.989 2035+0 records out 00:07:10.989 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0128997 s, 80.8 MB/s 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:10.989 456+0 records in 00:07:10.989 456+0 records out 00:07:10.989 233472 bytes (233 kB, 228 KiB) copied, 0.00388734 s, 60.1 MB/s 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:10.989 17:51:52 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.989 17:51:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:11.256 [2024-11-26 17:51:53.053433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.256 17:51:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:11.256 17:51:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:11.256 17:51:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:11.256 17:51:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.256 17:51:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.256 17:51:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:11.256 17:51:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:11.256 17:51:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.256 17:51:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:11.256 17:51:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:11.256 17:51:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:11.516 17:51:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.516 17:51:53 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.516 17:51:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60618 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60618 ']' 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60618 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60618 00:07:11.776 killing process with pid 60618 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 60618' 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60618 00:07:11.776 [2024-11-26 17:51:53.452794] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.776 [2024-11-26 17:51:53.452958] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.776 17:51:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60618 00:07:11.776 [2024-11-26 17:51:53.453037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.776 [2024-11-26 17:51:53.453054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:12.075 [2024-11-26 17:51:53.699221] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:13.468 17:51:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:13.468 ************************************ 00:07:13.468 END TEST raid_function_test_concat 00:07:13.468 ************************************ 00:07:13.468 00:07:13.468 real 0m4.553s 00:07:13.468 user 0m5.348s 00:07:13.468 sys 0m1.129s 00:07:13.468 17:51:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.468 17:51:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:13.468 17:51:55 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:13.468 17:51:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:13.468 17:51:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.468 17:51:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:13.468 ************************************ 00:07:13.468 START TEST raid0_resize_test 00:07:13.468 ************************************ 00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60758 00:07:13.468 Process raid pid: 60758 00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60758' 00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60758 00:07:13.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60758 ']' 00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.468 17:51:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.468 [2024-11-26 17:51:55.235153] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:07:13.468 [2024-11-26 17:51:55.235326] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.727 [2024-11-26 17:51:55.424430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.727 [2024-11-26 17:51:55.559441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.986 [2024-11-26 17:51:55.797384] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.986 [2024-11-26 17:51:55.797455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.555 Base_1 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:14.555 Base_2 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.555 [2024-11-26 17:51:56.189253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:14.555 [2024-11-26 17:51:56.191596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:14.555 [2024-11-26 17:51:56.191686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:14.555 [2024-11-26 17:51:56.191699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:14.555 [2024-11-26 17:51:56.192026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:14.555 [2024-11-26 17:51:56.192186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:14.555 [2024-11-26 17:51:56.192197] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:14.555 [2024-11-26 17:51:56.192381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:07:14.555 [2024-11-26 17:51:56.201210] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:14.555 [2024-11-26 17:51:56.201342] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:14.555 true 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.555 [2024-11-26 17:51:56.217436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.555 [2024-11-26 17:51:56.265173] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:14.555 [2024-11-26 17:51:56.265228] 
bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:14.555 [2024-11-26 17:51:56.265274] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:14.555 true 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:14.555 [2024-11-26 17:51:56.281366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60758 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60758 ']' 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60758 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60758 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60758' 00:07:14.555 killing process with pid 60758 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60758 00:07:14.555 [2024-11-26 17:51:56.368122] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.555 17:51:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60758 00:07:14.555 [2024-11-26 17:51:56.368348] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.555 [2024-11-26 17:51:56.368409] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.555 [2024-11-26 17:51:56.368419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:14.555 [2024-11-26 17:51:56.387450] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:15.934 17:51:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:15.934 00:07:15.934 real 0m2.429s 00:07:15.934 user 0m2.628s 00:07:15.934 sys 0m0.389s 00:07:15.934 17:51:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.934 17:51:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.934 ************************************ 00:07:15.934 END TEST raid0_resize_test 00:07:15.934 ************************************ 00:07:15.934 17:51:57 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:15.934 
17:51:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:15.934 17:51:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.934 17:51:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:15.934 ************************************ 00:07:15.934 START TEST raid1_resize_test 00:07:15.934 ************************************ 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60814 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60814' 00:07:15.934 Process raid pid: 60814 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60814 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60814 ']' 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.934 17:51:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.934 [2024-11-26 17:51:57.774933] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:07:15.934 [2024-11-26 17:51:57.775225] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.194 [2024-11-26 17:51:57.957965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.455 [2024-11-26 17:51:58.083779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.455 [2024-11-26 17:51:58.293380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.455 [2024-11-26 17:51:58.293531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.764 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.764 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:16.764 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:16.764 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.764 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.764 
Base_1 00:07:16.764 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.764 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:16.764 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.764 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.764 Base_2 00:07:16.764 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.764 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:16.764 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:16.764 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.764 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.764 [2024-11-26 17:51:58.600054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:17.029 [2024-11-26 17:51:58.602227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:17.029 [2024-11-26 17:51:58.602414] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:17.029 [2024-11-26 17:51:58.602437] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:17.029 [2024-11-26 17:51:58.602805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:17.029 [2024-11-26 17:51:58.602974] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:17.029 [2024-11-26 17:51:58.602985] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:17.029 [2024-11-26 17:51:58.603218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.029 [2024-11-26 17:51:58.612003] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:17.029 [2024-11-26 17:51:58.612054] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:17.029 true 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.029 [2024-11-26 17:51:58.628222] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.029 [2024-11-26 17:51:58.675924] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:17.029 [2024-11-26 17:51:58.675970] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:17.029 [2024-11-26 17:51:58.676009] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:17.029 true 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.029 [2024-11-26 17:51:58.692124] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 60814 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60814 ']' 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60814 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60814 00:07:17.029 killing process with pid 60814 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60814' 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60814 00:07:17.029 [2024-11-26 17:51:58.773701] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.029 [2024-11-26 17:51:58.773811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.029 17:51:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60814 00:07:17.029 [2024-11-26 17:51:58.774335] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.029 [2024-11-26 17:51:58.774366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:17.029 [2024-11-26 17:51:58.792840] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.409 17:51:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:18.409 00:07:18.409 real 0m2.359s 00:07:18.409 user 0m2.481s 00:07:18.409 sys 0m0.387s 00:07:18.409 17:51:59 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.409 ************************************ 00:07:18.409 END TEST raid1_resize_test 00:07:18.409 ************************************ 00:07:18.409 17:51:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.409 17:52:00 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:18.409 17:52:00 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:18.409 17:52:00 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:18.409 17:52:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:18.409 17:52:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.409 17:52:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:18.409 ************************************ 00:07:18.409 START TEST raid_state_function_test 00:07:18.409 ************************************ 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:18.409 Process raid pid: 60871 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60871 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:18.409 17:52:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60871' 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60871 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60871 ']' 00:07:18.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.409 17:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.409 [2024-11-26 17:52:00.166197] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:07:18.409 [2024-11-26 17:52:00.166858] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.669 [2024-11-26 17:52:00.348795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.669 [2024-11-26 17:52:00.465771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.930 [2024-11-26 17:52:00.672340] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.930 [2024-11-26 17:52:00.672390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.190 [2024-11-26 17:52:01.018537] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:19.190 [2024-11-26 17:52:01.018595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:19.190 [2024-11-26 17:52:01.018606] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.190 [2024-11-26 17:52:01.018615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.190 17:52:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.190 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.449 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.449 "name": "Existed_Raid", 00:07:19.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.449 "strip_size_kb": 64, 00:07:19.449 "state": "configuring", 00:07:19.449 
"raid_level": "raid0", 00:07:19.449 "superblock": false, 00:07:19.449 "num_base_bdevs": 2, 00:07:19.449 "num_base_bdevs_discovered": 0, 00:07:19.449 "num_base_bdevs_operational": 2, 00:07:19.449 "base_bdevs_list": [ 00:07:19.449 { 00:07:19.449 "name": "BaseBdev1", 00:07:19.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.449 "is_configured": false, 00:07:19.449 "data_offset": 0, 00:07:19.449 "data_size": 0 00:07:19.449 }, 00:07:19.449 { 00:07:19.449 "name": "BaseBdev2", 00:07:19.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.449 "is_configured": false, 00:07:19.449 "data_offset": 0, 00:07:19.449 "data_size": 0 00:07:19.449 } 00:07:19.449 ] 00:07:19.449 }' 00:07:19.449 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.449 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.709 [2024-11-26 17:52:01.457749] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:19.709 [2024-11-26 17:52:01.457860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:19.709 [2024-11-26 17:52:01.469707] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:19.709 [2024-11-26 17:52:01.469801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:19.709 [2024-11-26 17:52:01.469839] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.709 [2024-11-26 17:52:01.469876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.709 [2024-11-26 17:52:01.519219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:19.709 BaseBdev1 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.709 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.709 [ 00:07:19.709 { 00:07:19.709 "name": "BaseBdev1", 00:07:19.709 "aliases": [ 00:07:19.709 "e2671db2-5d07-41d5-8f1a-94ba4e15ff60" 00:07:19.709 ], 00:07:19.709 "product_name": "Malloc disk", 00:07:19.709 "block_size": 512, 00:07:19.709 "num_blocks": 65536, 00:07:19.709 "uuid": "e2671db2-5d07-41d5-8f1a-94ba4e15ff60", 00:07:19.709 "assigned_rate_limits": { 00:07:19.709 "rw_ios_per_sec": 0, 00:07:19.709 "rw_mbytes_per_sec": 0, 00:07:19.709 "r_mbytes_per_sec": 0, 00:07:19.709 "w_mbytes_per_sec": 0 00:07:19.709 }, 00:07:19.709 "claimed": true, 00:07:19.709 "claim_type": "exclusive_write", 00:07:19.709 "zoned": false, 00:07:19.709 "supported_io_types": { 00:07:19.709 "read": true, 00:07:19.709 "write": true, 00:07:19.709 "unmap": true, 00:07:19.709 "flush": true, 00:07:19.709 "reset": true, 00:07:19.709 "nvme_admin": false, 00:07:19.709 "nvme_io": false, 00:07:19.709 "nvme_io_md": false, 00:07:19.709 "write_zeroes": true, 00:07:19.709 "zcopy": true, 00:07:19.709 "get_zone_info": false, 00:07:19.709 "zone_management": false, 00:07:19.709 "zone_append": false, 00:07:19.709 "compare": false, 00:07:19.709 "compare_and_write": false, 00:07:19.709 "abort": true, 00:07:19.709 "seek_hole": false, 00:07:19.709 "seek_data": false, 00:07:19.709 "copy": true, 00:07:19.709 "nvme_iov_md": 
false 00:07:19.709 }, 00:07:19.709 "memory_domains": [ 00:07:19.709 { 00:07:19.709 "dma_device_id": "system", 00:07:19.709 "dma_device_type": 1 00:07:19.709 }, 00:07:19.709 { 00:07:19.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.709 "dma_device_type": 2 00:07:19.709 } 00:07:19.710 ], 00:07:19.710 "driver_specific": {} 00:07:19.710 } 00:07:19.710 ] 00:07:19.710 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.710 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:19.710 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:19.710 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.710 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.710 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.710 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.710 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.710 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.710 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.710 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.710 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.710 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.710 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.710 
17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.710 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.970 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.970 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.970 "name": "Existed_Raid", 00:07:19.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.970 "strip_size_kb": 64, 00:07:19.970 "state": "configuring", 00:07:19.970 "raid_level": "raid0", 00:07:19.970 "superblock": false, 00:07:19.970 "num_base_bdevs": 2, 00:07:19.970 "num_base_bdevs_discovered": 1, 00:07:19.970 "num_base_bdevs_operational": 2, 00:07:19.970 "base_bdevs_list": [ 00:07:19.970 { 00:07:19.970 "name": "BaseBdev1", 00:07:19.970 "uuid": "e2671db2-5d07-41d5-8f1a-94ba4e15ff60", 00:07:19.970 "is_configured": true, 00:07:19.970 "data_offset": 0, 00:07:19.970 "data_size": 65536 00:07:19.970 }, 00:07:19.970 { 00:07:19.970 "name": "BaseBdev2", 00:07:19.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.970 "is_configured": false, 00:07:19.970 "data_offset": 0, 00:07:19.970 "data_size": 0 00:07:19.970 } 00:07:19.970 ] 00:07:19.970 }' 00:07:19.970 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.970 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.230 [2024-11-26 17:52:01.926577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:20.230 [2024-11-26 17:52:01.926636] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.230 [2024-11-26 17:52:01.938599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:20.230 [2024-11-26 17:52:01.940444] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.230 [2024-11-26 17:52:01.940551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.230 "name": "Existed_Raid", 00:07:20.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.230 "strip_size_kb": 64, 00:07:20.230 "state": "configuring", 00:07:20.230 "raid_level": "raid0", 00:07:20.230 "superblock": false, 00:07:20.230 "num_base_bdevs": 2, 00:07:20.230 "num_base_bdevs_discovered": 1, 00:07:20.230 "num_base_bdevs_operational": 2, 00:07:20.230 "base_bdevs_list": [ 00:07:20.230 { 00:07:20.230 "name": "BaseBdev1", 00:07:20.230 "uuid": "e2671db2-5d07-41d5-8f1a-94ba4e15ff60", 00:07:20.230 "is_configured": true, 00:07:20.230 "data_offset": 0, 00:07:20.230 "data_size": 65536 00:07:20.230 }, 00:07:20.230 { 00:07:20.230 "name": "BaseBdev2", 00:07:20.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.230 "is_configured": false, 00:07:20.230 "data_offset": 0, 00:07:20.230 "data_size": 0 00:07:20.230 } 00:07:20.230 
] 00:07:20.230 }' 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.230 17:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.799 [2024-11-26 17:52:02.400177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:20.799 [2024-11-26 17:52:02.400307] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:20.799 [2024-11-26 17:52:02.400351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:20.799 [2024-11-26 17:52:02.400677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:20.799 [2024-11-26 17:52:02.400940] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:20.799 [2024-11-26 17:52:02.400997] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:20.799 [2024-11-26 17:52:02.401370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.799 BaseBdev2 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:20.799 17:52:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.799 [ 00:07:20.799 { 00:07:20.799 "name": "BaseBdev2", 00:07:20.799 "aliases": [ 00:07:20.799 "a076849c-a580-4922-ac47-c8562b1a0c6a" 00:07:20.799 ], 00:07:20.799 "product_name": "Malloc disk", 00:07:20.799 "block_size": 512, 00:07:20.799 "num_blocks": 65536, 00:07:20.799 "uuid": "a076849c-a580-4922-ac47-c8562b1a0c6a", 00:07:20.799 "assigned_rate_limits": { 00:07:20.799 "rw_ios_per_sec": 0, 00:07:20.799 "rw_mbytes_per_sec": 0, 00:07:20.799 "r_mbytes_per_sec": 0, 00:07:20.799 "w_mbytes_per_sec": 0 00:07:20.799 }, 00:07:20.799 "claimed": true, 00:07:20.799 "claim_type": "exclusive_write", 00:07:20.799 "zoned": false, 00:07:20.799 "supported_io_types": { 00:07:20.799 "read": true, 00:07:20.799 "write": true, 00:07:20.799 "unmap": true, 00:07:20.799 "flush": true, 00:07:20.799 "reset": true, 00:07:20.799 "nvme_admin": false, 00:07:20.799 "nvme_io": false, 00:07:20.799 "nvme_io_md": 
false, 00:07:20.799 "write_zeroes": true, 00:07:20.799 "zcopy": true, 00:07:20.799 "get_zone_info": false, 00:07:20.799 "zone_management": false, 00:07:20.799 "zone_append": false, 00:07:20.799 "compare": false, 00:07:20.799 "compare_and_write": false, 00:07:20.799 "abort": true, 00:07:20.799 "seek_hole": false, 00:07:20.799 "seek_data": false, 00:07:20.799 "copy": true, 00:07:20.799 "nvme_iov_md": false 00:07:20.799 }, 00:07:20.799 "memory_domains": [ 00:07:20.799 { 00:07:20.799 "dma_device_id": "system", 00:07:20.799 "dma_device_type": 1 00:07:20.799 }, 00:07:20.799 { 00:07:20.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.799 "dma_device_type": 2 00:07:20.799 } 00:07:20.799 ], 00:07:20.799 "driver_specific": {} 00:07:20.799 } 00:07:20.799 ] 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.799 "name": "Existed_Raid", 00:07:20.799 "uuid": "477bdcd5-64ee-43a0-a32d-f73a78769298", 00:07:20.799 "strip_size_kb": 64, 00:07:20.799 "state": "online", 00:07:20.799 "raid_level": "raid0", 00:07:20.799 "superblock": false, 00:07:20.799 "num_base_bdevs": 2, 00:07:20.799 "num_base_bdevs_discovered": 2, 00:07:20.799 "num_base_bdevs_operational": 2, 00:07:20.799 "base_bdevs_list": [ 00:07:20.799 { 00:07:20.799 "name": "BaseBdev1", 00:07:20.799 "uuid": "e2671db2-5d07-41d5-8f1a-94ba4e15ff60", 00:07:20.799 "is_configured": true, 00:07:20.799 "data_offset": 0, 00:07:20.799 "data_size": 65536 00:07:20.799 }, 00:07:20.799 { 00:07:20.799 "name": "BaseBdev2", 00:07:20.799 "uuid": "a076849c-a580-4922-ac47-c8562b1a0c6a", 00:07:20.799 "is_configured": true, 00:07:20.799 "data_offset": 0, 00:07:20.799 "data_size": 65536 00:07:20.799 } 00:07:20.799 ] 00:07:20.799 }' 00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:20.799 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.090 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:21.090 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:21.090 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:21.090 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:21.090 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:21.090 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:21.090 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:21.090 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.090 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.090 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:21.090 [2024-11-26 17:52:02.851726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.090 17:52:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.090 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:21.090 "name": "Existed_Raid", 00:07:21.090 "aliases": [ 00:07:21.090 "477bdcd5-64ee-43a0-a32d-f73a78769298" 00:07:21.091 ], 00:07:21.091 "product_name": "Raid Volume", 00:07:21.091 "block_size": 512, 00:07:21.091 "num_blocks": 131072, 00:07:21.091 "uuid": "477bdcd5-64ee-43a0-a32d-f73a78769298", 00:07:21.091 "assigned_rate_limits": { 00:07:21.091 "rw_ios_per_sec": 0, 00:07:21.091 "rw_mbytes_per_sec": 0, 00:07:21.091 "r_mbytes_per_sec": 
0, 00:07:21.091 "w_mbytes_per_sec": 0 00:07:21.091 }, 00:07:21.091 "claimed": false, 00:07:21.091 "zoned": false, 00:07:21.091 "supported_io_types": { 00:07:21.091 "read": true, 00:07:21.091 "write": true, 00:07:21.091 "unmap": true, 00:07:21.091 "flush": true, 00:07:21.091 "reset": true, 00:07:21.091 "nvme_admin": false, 00:07:21.091 "nvme_io": false, 00:07:21.091 "nvme_io_md": false, 00:07:21.091 "write_zeroes": true, 00:07:21.091 "zcopy": false, 00:07:21.091 "get_zone_info": false, 00:07:21.091 "zone_management": false, 00:07:21.091 "zone_append": false, 00:07:21.091 "compare": false, 00:07:21.091 "compare_and_write": false, 00:07:21.091 "abort": false, 00:07:21.091 "seek_hole": false, 00:07:21.091 "seek_data": false, 00:07:21.091 "copy": false, 00:07:21.091 "nvme_iov_md": false 00:07:21.091 }, 00:07:21.091 "memory_domains": [ 00:07:21.091 { 00:07:21.091 "dma_device_id": "system", 00:07:21.091 "dma_device_type": 1 00:07:21.091 }, 00:07:21.091 { 00:07:21.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.091 "dma_device_type": 2 00:07:21.091 }, 00:07:21.091 { 00:07:21.091 "dma_device_id": "system", 00:07:21.091 "dma_device_type": 1 00:07:21.091 }, 00:07:21.091 { 00:07:21.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.091 "dma_device_type": 2 00:07:21.091 } 00:07:21.091 ], 00:07:21.091 "driver_specific": { 00:07:21.091 "raid": { 00:07:21.091 "uuid": "477bdcd5-64ee-43a0-a32d-f73a78769298", 00:07:21.091 "strip_size_kb": 64, 00:07:21.091 "state": "online", 00:07:21.091 "raid_level": "raid0", 00:07:21.091 "superblock": false, 00:07:21.091 "num_base_bdevs": 2, 00:07:21.091 "num_base_bdevs_discovered": 2, 00:07:21.091 "num_base_bdevs_operational": 2, 00:07:21.091 "base_bdevs_list": [ 00:07:21.091 { 00:07:21.091 "name": "BaseBdev1", 00:07:21.091 "uuid": "e2671db2-5d07-41d5-8f1a-94ba4e15ff60", 00:07:21.091 "is_configured": true, 00:07:21.091 "data_offset": 0, 00:07:21.091 "data_size": 65536 00:07:21.091 }, 00:07:21.091 { 00:07:21.091 "name": "BaseBdev2", 
00:07:21.091 "uuid": "a076849c-a580-4922-ac47-c8562b1a0c6a", 00:07:21.091 "is_configured": true, 00:07:21.091 "data_offset": 0, 00:07:21.091 "data_size": 65536 00:07:21.091 } 00:07:21.091 ] 00:07:21.091 } 00:07:21.091 } 00:07:21.091 }' 00:07:21.091 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:21.348 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:21.348 BaseBdev2' 00:07:21.348 17:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.348 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.348 [2024-11-26 17:52:03.115080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:21.348 [2024-11-26 17:52:03.115116] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:21.348 [2024-11-26 17:52:03.115170] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.607 "name": "Existed_Raid", 00:07:21.607 "uuid": "477bdcd5-64ee-43a0-a32d-f73a78769298", 00:07:21.607 "strip_size_kb": 64, 00:07:21.607 
"state": "offline", 00:07:21.607 "raid_level": "raid0", 00:07:21.607 "superblock": false, 00:07:21.607 "num_base_bdevs": 2, 00:07:21.607 "num_base_bdevs_discovered": 1, 00:07:21.607 "num_base_bdevs_operational": 1, 00:07:21.607 "base_bdevs_list": [ 00:07:21.607 { 00:07:21.607 "name": null, 00:07:21.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.607 "is_configured": false, 00:07:21.607 "data_offset": 0, 00:07:21.607 "data_size": 65536 00:07:21.607 }, 00:07:21.607 { 00:07:21.607 "name": "BaseBdev2", 00:07:21.607 "uuid": "a076849c-a580-4922-ac47-c8562b1a0c6a", 00:07:21.607 "is_configured": true, 00:07:21.607 "data_offset": 0, 00:07:21.607 "data_size": 65536 00:07:21.607 } 00:07:21.607 ] 00:07:21.607 }' 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.607 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.866 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:21.866 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:21.866 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:21.866 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.866 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.866 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.866 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.866 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:21.866 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:21.866 17:52:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:21.866 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.866 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.866 [2024-11-26 17:52:03.668050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:21.866 [2024-11-26 17:52:03.668110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60871 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60871 ']' 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60871 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60871 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.126 killing process with pid 60871 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60871' 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60871 00:07:22.126 [2024-11-26 17:52:03.844952] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:22.126 17:52:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60871 00:07:22.126 [2024-11-26 17:52:03.863585] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:23.505 17:52:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:23.505 00:07:23.505 real 0m4.978s 00:07:23.505 user 0m7.064s 00:07:23.505 sys 0m0.824s 00:07:23.505 17:52:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.505 17:52:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.505 ************************************ 00:07:23.505 END TEST raid_state_function_test 00:07:23.505 ************************************ 00:07:23.505 17:52:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:23.505 17:52:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:23.505 17:52:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.505 17:52:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:23.505 ************************************ 00:07:23.505 START TEST raid_state_function_test_sb 00:07:23.505 ************************************ 00:07:23.505 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:23.505 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:23.505 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:23.505 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:23.505 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:23.505 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:23.505 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:23.505 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61124 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:23.506 Process raid pid: 61124 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61124' 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61124 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61124 ']' 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.506 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.506 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.506 [2024-11-26 17:52:05.210732] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:07:23.506 [2024-11-26 17:52:05.210850] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.765 [2024-11-26 17:52:05.391363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.765 [2024-11-26 17:52:05.518522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.024 [2024-11-26 17:52:05.742429] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.024 [2024-11-26 17:52:05.742478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.284 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.284 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:24.284 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:24.284 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.284 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.284 [2024-11-26 17:52:06.088248] bdev.c:8666:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:24.284 [2024-11-26 17:52:06.088302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:24.284 [2024-11-26 17:52:06.088313] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:24.284 [2024-11-26 17:52:06.088324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:24.284 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.284 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:24.284 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.284 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:24.284 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:24.284 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.284 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.284 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.284 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.284 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.284 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.284 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.285 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:24.285 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.285 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.285 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.544 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.544 "name": "Existed_Raid", 00:07:24.544 "uuid": "eba63902-5f89-4c0c-989f-389387c95768", 00:07:24.544 "strip_size_kb": 64, 00:07:24.544 "state": "configuring", 00:07:24.544 "raid_level": "raid0", 00:07:24.544 "superblock": true, 00:07:24.544 "num_base_bdevs": 2, 00:07:24.544 "num_base_bdevs_discovered": 0, 00:07:24.544 "num_base_bdevs_operational": 2, 00:07:24.544 "base_bdevs_list": [ 00:07:24.544 { 00:07:24.544 "name": "BaseBdev1", 00:07:24.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.544 "is_configured": false, 00:07:24.544 "data_offset": 0, 00:07:24.544 "data_size": 0 00:07:24.544 }, 00:07:24.544 { 00:07:24.544 "name": "BaseBdev2", 00:07:24.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.544 "is_configured": false, 00:07:24.544 "data_offset": 0, 00:07:24.544 "data_size": 0 00:07:24.544 } 00:07:24.544 ] 00:07:24.544 }' 00:07:24.544 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.544 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.804 [2024-11-26 17:52:06.583299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:24.804 [2024-11-26 17:52:06.583346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.804 [2024-11-26 17:52:06.595284] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:24.804 [2024-11-26 17:52:06.595331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:24.804 [2024-11-26 17:52:06.595342] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:24.804 [2024-11-26 17:52:06.595356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.804 [2024-11-26 17:52:06.649741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:24.804 BaseBdev1 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.804 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.102 [ 00:07:25.102 { 00:07:25.102 "name": "BaseBdev1", 00:07:25.102 "aliases": [ 00:07:25.102 "f2135d95-6ac7-4e3b-a32a-6eb55679291a" 00:07:25.102 ], 00:07:25.102 "product_name": "Malloc disk", 00:07:25.102 "block_size": 512, 00:07:25.102 "num_blocks": 65536, 00:07:25.102 "uuid": "f2135d95-6ac7-4e3b-a32a-6eb55679291a", 00:07:25.102 "assigned_rate_limits": { 00:07:25.102 "rw_ios_per_sec": 0, 00:07:25.102 "rw_mbytes_per_sec": 0, 00:07:25.102 "r_mbytes_per_sec": 0, 00:07:25.102 "w_mbytes_per_sec": 0 00:07:25.102 }, 00:07:25.102 "claimed": true, 
00:07:25.102 "claim_type": "exclusive_write", 00:07:25.102 "zoned": false, 00:07:25.102 "supported_io_types": { 00:07:25.102 "read": true, 00:07:25.102 "write": true, 00:07:25.102 "unmap": true, 00:07:25.102 "flush": true, 00:07:25.102 "reset": true, 00:07:25.102 "nvme_admin": false, 00:07:25.102 "nvme_io": false, 00:07:25.102 "nvme_io_md": false, 00:07:25.102 "write_zeroes": true, 00:07:25.102 "zcopy": true, 00:07:25.102 "get_zone_info": false, 00:07:25.102 "zone_management": false, 00:07:25.102 "zone_append": false, 00:07:25.102 "compare": false, 00:07:25.102 "compare_and_write": false, 00:07:25.102 "abort": true, 00:07:25.102 "seek_hole": false, 00:07:25.102 "seek_data": false, 00:07:25.102 "copy": true, 00:07:25.102 "nvme_iov_md": false 00:07:25.102 }, 00:07:25.102 "memory_domains": [ 00:07:25.102 { 00:07:25.102 "dma_device_id": "system", 00:07:25.102 "dma_device_type": 1 00:07:25.102 }, 00:07:25.102 { 00:07:25.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.102 "dma_device_type": 2 00:07:25.102 } 00:07:25.102 ], 00:07:25.102 "driver_specific": {} 00:07:25.102 } 00:07:25.102 ] 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.102 17:52:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.102 "name": "Existed_Raid", 00:07:25.102 "uuid": "1401c1cb-2034-43d4-b534-fbd66b95b4be", 00:07:25.102 "strip_size_kb": 64, 00:07:25.102 "state": "configuring", 00:07:25.102 "raid_level": "raid0", 00:07:25.102 "superblock": true, 00:07:25.102 "num_base_bdevs": 2, 00:07:25.102 "num_base_bdevs_discovered": 1, 00:07:25.102 "num_base_bdevs_operational": 2, 00:07:25.102 "base_bdevs_list": [ 00:07:25.102 { 00:07:25.102 "name": "BaseBdev1", 00:07:25.102 "uuid": "f2135d95-6ac7-4e3b-a32a-6eb55679291a", 00:07:25.102 "is_configured": true, 00:07:25.102 "data_offset": 2048, 00:07:25.102 "data_size": 63488 00:07:25.102 }, 00:07:25.102 { 00:07:25.102 "name": "BaseBdev2", 00:07:25.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.102 
"is_configured": false, 00:07:25.102 "data_offset": 0, 00:07:25.102 "data_size": 0 00:07:25.102 } 00:07:25.102 ] 00:07:25.102 }' 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.102 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.386 [2024-11-26 17:52:07.193048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:25.386 [2024-11-26 17:52:07.193114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.386 [2024-11-26 17:52:07.205166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:25.386 [2024-11-26 17:52:07.207398] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.386 [2024-11-26 17:52:07.207444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.386 17:52:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.386 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.645 17:52:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.645 "name": "Existed_Raid", 00:07:25.645 "uuid": "4dc00c9d-3603-4afa-9acf-99a441592b1e", 00:07:25.645 "strip_size_kb": 64, 00:07:25.645 "state": "configuring", 00:07:25.645 "raid_level": "raid0", 00:07:25.645 "superblock": true, 00:07:25.645 "num_base_bdevs": 2, 00:07:25.645 "num_base_bdevs_discovered": 1, 00:07:25.645 "num_base_bdevs_operational": 2, 00:07:25.645 "base_bdevs_list": [ 00:07:25.645 { 00:07:25.645 "name": "BaseBdev1", 00:07:25.645 "uuid": "f2135d95-6ac7-4e3b-a32a-6eb55679291a", 00:07:25.645 "is_configured": true, 00:07:25.645 "data_offset": 2048, 00:07:25.645 "data_size": 63488 00:07:25.645 }, 00:07:25.645 { 00:07:25.645 "name": "BaseBdev2", 00:07:25.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.645 "is_configured": false, 00:07:25.645 "data_offset": 0, 00:07:25.645 "data_size": 0 00:07:25.645 } 00:07:25.645 ] 00:07:25.646 }' 00:07:25.646 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.646 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.904 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:25.904 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.904 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.904 [2024-11-26 17:52:07.720234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:25.904 [2024-11-26 17:52:07.720537] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:25.904 [2024-11-26 17:52:07.720603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:25.904 [2024-11-26 17:52:07.720926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:25.904 BaseBdev2 00:07:25.904 [2024-11-26 17:52:07.721151] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:25.904 [2024-11-26 17:52:07.721171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:25.904 [2024-11-26 17:52:07.721359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.904 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.904 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:25.904 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:25.904 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:25.904 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:25.904 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:25.904 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:25.904 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:25.904 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.905 17:52:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.905 [ 00:07:25.905 { 00:07:25.905 "name": "BaseBdev2", 00:07:25.905 "aliases": [ 00:07:25.905 "61aad271-e10a-4351-a492-4dde0c2419ce" 00:07:25.905 ], 00:07:25.905 "product_name": "Malloc disk", 00:07:25.905 "block_size": 512, 00:07:25.905 "num_blocks": 65536, 00:07:25.905 "uuid": "61aad271-e10a-4351-a492-4dde0c2419ce", 00:07:25.905 "assigned_rate_limits": { 00:07:25.905 "rw_ios_per_sec": 0, 00:07:25.905 "rw_mbytes_per_sec": 0, 00:07:25.905 "r_mbytes_per_sec": 0, 00:07:25.905 "w_mbytes_per_sec": 0 00:07:25.905 }, 00:07:25.905 "claimed": true, 00:07:25.905 "claim_type": "exclusive_write", 00:07:25.905 "zoned": false, 00:07:25.905 "supported_io_types": { 00:07:25.905 "read": true, 00:07:25.905 "write": true, 00:07:25.905 "unmap": true, 00:07:25.905 "flush": true, 00:07:25.905 "reset": true, 00:07:25.905 "nvme_admin": false, 00:07:25.905 "nvme_io": false, 00:07:25.905 "nvme_io_md": false, 00:07:25.905 "write_zeroes": true, 00:07:25.905 "zcopy": true, 00:07:25.905 "get_zone_info": false, 00:07:25.905 "zone_management": false, 00:07:25.905 "zone_append": false, 00:07:25.905 "compare": false, 00:07:25.905 "compare_and_write": false, 00:07:25.905 "abort": true, 00:07:25.905 "seek_hole": false, 00:07:25.905 "seek_data": false, 00:07:25.905 "copy": true, 00:07:25.905 "nvme_iov_md": false 00:07:25.905 }, 00:07:25.905 "memory_domains": [ 00:07:25.905 { 00:07:25.905 "dma_device_id": "system", 00:07:25.905 "dma_device_type": 1 00:07:25.905 }, 00:07:25.905 { 00:07:25.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.905 "dma_device_type": 2 00:07:25.905 } 00:07:25.905 ], 00:07:25.905 "driver_specific": {} 00:07:25.905 } 00:07:25.905 ] 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:25.905 17:52:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.905 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.164 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.164 17:52:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.164 "name": "Existed_Raid", 00:07:26.164 "uuid": "4dc00c9d-3603-4afa-9acf-99a441592b1e", 00:07:26.164 "strip_size_kb": 64, 00:07:26.164 "state": "online", 00:07:26.164 "raid_level": "raid0", 00:07:26.164 "superblock": true, 00:07:26.164 "num_base_bdevs": 2, 00:07:26.164 "num_base_bdevs_discovered": 2, 00:07:26.164 "num_base_bdevs_operational": 2, 00:07:26.164 "base_bdevs_list": [ 00:07:26.164 { 00:07:26.164 "name": "BaseBdev1", 00:07:26.164 "uuid": "f2135d95-6ac7-4e3b-a32a-6eb55679291a", 00:07:26.164 "is_configured": true, 00:07:26.164 "data_offset": 2048, 00:07:26.164 "data_size": 63488 00:07:26.164 }, 00:07:26.164 { 00:07:26.164 "name": "BaseBdev2", 00:07:26.164 "uuid": "61aad271-e10a-4351-a492-4dde0c2419ce", 00:07:26.164 "is_configured": true, 00:07:26.164 "data_offset": 2048, 00:07:26.164 "data_size": 63488 00:07:26.164 } 00:07:26.164 ] 00:07:26.164 }' 00:07:26.164 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.164 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.424 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:26.425 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:26.425 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:26.425 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:26.425 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:26.425 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:26.425 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:26.425 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:26.425 17:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.425 17:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.425 [2024-11-26 17:52:08.263692] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.425 17:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:26.685 "name": "Existed_Raid", 00:07:26.685 "aliases": [ 00:07:26.685 "4dc00c9d-3603-4afa-9acf-99a441592b1e" 00:07:26.685 ], 00:07:26.685 "product_name": "Raid Volume", 00:07:26.685 "block_size": 512, 00:07:26.685 "num_blocks": 126976, 00:07:26.685 "uuid": "4dc00c9d-3603-4afa-9acf-99a441592b1e", 00:07:26.685 "assigned_rate_limits": { 00:07:26.685 "rw_ios_per_sec": 0, 00:07:26.685 "rw_mbytes_per_sec": 0, 00:07:26.685 "r_mbytes_per_sec": 0, 00:07:26.685 "w_mbytes_per_sec": 0 00:07:26.685 }, 00:07:26.685 "claimed": false, 00:07:26.685 "zoned": false, 00:07:26.685 "supported_io_types": { 00:07:26.685 "read": true, 00:07:26.685 "write": true, 00:07:26.685 "unmap": true, 00:07:26.685 "flush": true, 00:07:26.685 "reset": true, 00:07:26.685 "nvme_admin": false, 00:07:26.685 "nvme_io": false, 00:07:26.685 "nvme_io_md": false, 00:07:26.685 "write_zeroes": true, 00:07:26.685 "zcopy": false, 00:07:26.685 "get_zone_info": false, 00:07:26.685 "zone_management": false, 00:07:26.685 "zone_append": false, 00:07:26.685 "compare": false, 00:07:26.685 "compare_and_write": false, 00:07:26.685 "abort": false, 00:07:26.685 "seek_hole": false, 00:07:26.685 "seek_data": false, 00:07:26.685 "copy": false, 00:07:26.685 "nvme_iov_md": false 00:07:26.685 }, 00:07:26.685 "memory_domains": [ 00:07:26.685 { 00:07:26.685 
"dma_device_id": "system", 00:07:26.685 "dma_device_type": 1 00:07:26.685 }, 00:07:26.685 { 00:07:26.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.685 "dma_device_type": 2 00:07:26.685 }, 00:07:26.685 { 00:07:26.685 "dma_device_id": "system", 00:07:26.685 "dma_device_type": 1 00:07:26.685 }, 00:07:26.685 { 00:07:26.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.685 "dma_device_type": 2 00:07:26.685 } 00:07:26.685 ], 00:07:26.685 "driver_specific": { 00:07:26.685 "raid": { 00:07:26.685 "uuid": "4dc00c9d-3603-4afa-9acf-99a441592b1e", 00:07:26.685 "strip_size_kb": 64, 00:07:26.685 "state": "online", 00:07:26.685 "raid_level": "raid0", 00:07:26.685 "superblock": true, 00:07:26.685 "num_base_bdevs": 2, 00:07:26.685 "num_base_bdevs_discovered": 2, 00:07:26.685 "num_base_bdevs_operational": 2, 00:07:26.685 "base_bdevs_list": [ 00:07:26.685 { 00:07:26.685 "name": "BaseBdev1", 00:07:26.685 "uuid": "f2135d95-6ac7-4e3b-a32a-6eb55679291a", 00:07:26.685 "is_configured": true, 00:07:26.685 "data_offset": 2048, 00:07:26.685 "data_size": 63488 00:07:26.685 }, 00:07:26.685 { 00:07:26.685 "name": "BaseBdev2", 00:07:26.685 "uuid": "61aad271-e10a-4351-a492-4dde0c2419ce", 00:07:26.685 "is_configured": true, 00:07:26.685 "data_offset": 2048, 00:07:26.685 "data_size": 63488 00:07:26.685 } 00:07:26.685 ] 00:07:26.685 } 00:07:26.685 } 00:07:26.685 }' 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:26.685 BaseBdev2' 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:26.685 17:52:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.685 17:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.685 [2024-11-26 17:52:08.511003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:26.686 [2024-11-26 17:52:08.511055] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:26.686 [2024-11-26 17:52:08.511114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.946 "name": "Existed_Raid", 00:07:26.946 "uuid": "4dc00c9d-3603-4afa-9acf-99a441592b1e", 00:07:26.946 "strip_size_kb": 64, 00:07:26.946 "state": "offline", 00:07:26.946 "raid_level": "raid0", 00:07:26.946 "superblock": true, 00:07:26.946 "num_base_bdevs": 2, 00:07:26.946 "num_base_bdevs_discovered": 1, 00:07:26.946 "num_base_bdevs_operational": 1, 00:07:26.946 "base_bdevs_list": [ 00:07:26.946 { 00:07:26.946 "name": null, 00:07:26.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.946 "is_configured": false, 00:07:26.946 "data_offset": 0, 00:07:26.946 "data_size": 63488 00:07:26.946 }, 00:07:26.946 { 00:07:26.946 "name": "BaseBdev2", 00:07:26.946 "uuid": "61aad271-e10a-4351-a492-4dde0c2419ce", 00:07:26.946 "is_configured": true, 00:07:26.946 "data_offset": 2048, 00:07:26.946 "data_size": 63488 00:07:26.946 } 00:07:26.946 ] 
00:07:26.946 }' 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.946 17:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.206 17:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:27.206 17:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:27.206 17:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.206 17:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:27.206 17:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.206 17:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.467 [2024-11-26 17:52:09.104195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:27.467 [2024-11-26 17:52:09.104304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.467 17:52:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61124 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61124 ']' 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61124 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61124 00:07:27.467 killing process with pid 61124 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61124' 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61124 00:07:27.467 [2024-11-26 17:52:09.275182] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.467 17:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61124 00:07:27.467 [2024-11-26 17:52:09.292819] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:28.848 17:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:28.848 ************************************ 00:07:28.848 END TEST raid_state_function_test_sb 00:07:28.848 ************************************ 00:07:28.848 00:07:28.848 real 0m5.381s 00:07:28.848 user 0m7.818s 00:07:28.848 sys 0m0.878s 00:07:28.848 17:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.848 17:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.848 17:52:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:28.848 17:52:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:28.848 17:52:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.848 17:52:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.848 ************************************ 00:07:28.848 START TEST raid_superblock_test 00:07:28.848 ************************************ 00:07:28.848 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:28.848 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:28.848 17:52:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:28.848 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:28.848 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:28.848 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:28.848 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:28.848 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:28.848 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:28.848 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:28.848 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:28.848 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:28.848 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:28.848 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:28.848 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:28.849 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:28.849 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:28.849 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61382 00:07:28.849 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:28.849 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61382 00:07:28.849 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61382 ']' 00:07:28.849 
17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.849 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.849 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.849 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.849 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.849 [2024-11-26 17:52:10.651954] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:07:28.849 [2024-11-26 17:52:10.652274] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61382 ] 00:07:29.108 [2024-11-26 17:52:10.833203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.108 [2024-11-26 17:52:10.965456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.367 [2024-11-26 17:52:11.183217] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.367 [2024-11-26 17:52:11.183354] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.958 malloc1 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.958 [2024-11-26 17:52:11.579553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:29.958 [2024-11-26 17:52:11.579695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.958 [2024-11-26 17:52:11.579740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:29.958 [2024-11-26 17:52:11.579811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:29.958 [2024-11-26 17:52:11.582330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.958 [2024-11-26 17:52:11.582411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:29.958 pt1 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.958 malloc2 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.958 [2024-11-26 17:52:11.637510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:29.958 [2024-11-26 17:52:11.637623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.958 [2024-11-26 17:52:11.637673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:29.958 [2024-11-26 17:52:11.637746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.958 [2024-11-26 17:52:11.640125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.958 [2024-11-26 17:52:11.640219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:29.958 pt2 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.958 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.959 [2024-11-26 17:52:11.649546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:29.959 [2024-11-26 17:52:11.651491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:29.959 [2024-11-26 17:52:11.651654] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:29.959 [2024-11-26 17:52:11.651667] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:29.959 [2024-11-26 17:52:11.651925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:29.959 [2024-11-26 17:52:11.652101] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:29.959 [2024-11-26 17:52:11.652114] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:29.959 [2024-11-26 17:52:11.652303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.959 17:52:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.959 "name": "raid_bdev1", 00:07:29.959 "uuid": "1730a263-2914-4a59-8063-382677a46eb9", 00:07:29.959 "strip_size_kb": 64, 00:07:29.959 "state": "online", 00:07:29.959 "raid_level": "raid0", 00:07:29.959 "superblock": true, 00:07:29.959 "num_base_bdevs": 2, 00:07:29.959 "num_base_bdevs_discovered": 2, 00:07:29.959 "num_base_bdevs_operational": 2, 00:07:29.959 "base_bdevs_list": [ 00:07:29.959 { 00:07:29.959 "name": "pt1", 00:07:29.959 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:29.959 "is_configured": true, 00:07:29.959 "data_offset": 2048, 00:07:29.959 "data_size": 63488 00:07:29.959 }, 00:07:29.959 { 00:07:29.959 "name": "pt2", 00:07:29.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:29.959 "is_configured": true, 00:07:29.959 "data_offset": 2048, 00:07:29.959 "data_size": 63488 00:07:29.959 } 00:07:29.959 ] 00:07:29.959 }' 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.959 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.528 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:30.528 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:30.528 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:30.528 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:30.528 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:30.528 
17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:30.528 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:30.528 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:30.528 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.528 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.528 [2024-11-26 17:52:12.105388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.528 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.528 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:30.528 "name": "raid_bdev1", 00:07:30.528 "aliases": [ 00:07:30.528 "1730a263-2914-4a59-8063-382677a46eb9" 00:07:30.528 ], 00:07:30.528 "product_name": "Raid Volume", 00:07:30.528 "block_size": 512, 00:07:30.528 "num_blocks": 126976, 00:07:30.528 "uuid": "1730a263-2914-4a59-8063-382677a46eb9", 00:07:30.528 "assigned_rate_limits": { 00:07:30.528 "rw_ios_per_sec": 0, 00:07:30.528 "rw_mbytes_per_sec": 0, 00:07:30.528 "r_mbytes_per_sec": 0, 00:07:30.528 "w_mbytes_per_sec": 0 00:07:30.528 }, 00:07:30.528 "claimed": false, 00:07:30.528 "zoned": false, 00:07:30.528 "supported_io_types": { 00:07:30.528 "read": true, 00:07:30.528 "write": true, 00:07:30.528 "unmap": true, 00:07:30.529 "flush": true, 00:07:30.529 "reset": true, 00:07:30.529 "nvme_admin": false, 00:07:30.529 "nvme_io": false, 00:07:30.529 "nvme_io_md": false, 00:07:30.529 "write_zeroes": true, 00:07:30.529 "zcopy": false, 00:07:30.529 "get_zone_info": false, 00:07:30.529 "zone_management": false, 00:07:30.529 "zone_append": false, 00:07:30.529 "compare": false, 00:07:30.529 "compare_and_write": false, 00:07:30.529 "abort": false, 00:07:30.529 "seek_hole": false, 00:07:30.529 
"seek_data": false, 00:07:30.529 "copy": false, 00:07:30.529 "nvme_iov_md": false 00:07:30.529 }, 00:07:30.529 "memory_domains": [ 00:07:30.529 { 00:07:30.529 "dma_device_id": "system", 00:07:30.529 "dma_device_type": 1 00:07:30.529 }, 00:07:30.529 { 00:07:30.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.529 "dma_device_type": 2 00:07:30.529 }, 00:07:30.529 { 00:07:30.529 "dma_device_id": "system", 00:07:30.529 "dma_device_type": 1 00:07:30.529 }, 00:07:30.529 { 00:07:30.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.529 "dma_device_type": 2 00:07:30.529 } 00:07:30.529 ], 00:07:30.529 "driver_specific": { 00:07:30.529 "raid": { 00:07:30.529 "uuid": "1730a263-2914-4a59-8063-382677a46eb9", 00:07:30.529 "strip_size_kb": 64, 00:07:30.529 "state": "online", 00:07:30.529 "raid_level": "raid0", 00:07:30.529 "superblock": true, 00:07:30.529 "num_base_bdevs": 2, 00:07:30.529 "num_base_bdevs_discovered": 2, 00:07:30.529 "num_base_bdevs_operational": 2, 00:07:30.529 "base_bdevs_list": [ 00:07:30.529 { 00:07:30.529 "name": "pt1", 00:07:30.529 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:30.529 "is_configured": true, 00:07:30.529 "data_offset": 2048, 00:07:30.529 "data_size": 63488 00:07:30.529 }, 00:07:30.529 { 00:07:30.529 "name": "pt2", 00:07:30.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.529 "is_configured": true, 00:07:30.529 "data_offset": 2048, 00:07:30.529 "data_size": 63488 00:07:30.529 } 00:07:30.529 ] 00:07:30.529 } 00:07:30.529 } 00:07:30.529 }' 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:30.529 pt2' 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.529 17:52:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.529 [2024-11-26 17:52:12.325407] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1730a263-2914-4a59-8063-382677a46eb9 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1730a263-2914-4a59-8063-382677a46eb9 ']' 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.529 [2024-11-26 17:52:12.369019] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:30.529 [2024-11-26 17:52:12.369061] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:30.529 [2024-11-26 17:52:12.369165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.529 [2024-11-26 17:52:12.369218] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.529 [2024-11-26 17:52:12.369231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.529 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] 
| select(.product_name == "passthru")] | any' 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.791 [2024-11-26 17:52:12.509076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:30.791 [2024-11-26 17:52:12.511167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:30.791 [2024-11-26 17:52:12.511242] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:30.791 [2024-11-26 17:52:12.511301] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:30.791 [2024-11-26 17:52:12.511318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:30.791 [2024-11-26 17:52:12.511333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:30.791 request: 00:07:30.791 { 00:07:30.791 "name": "raid_bdev1", 00:07:30.791 "raid_level": "raid0", 00:07:30.791 "base_bdevs": [ 00:07:30.791 "malloc1", 00:07:30.791 "malloc2" 00:07:30.791 ], 00:07:30.791 "strip_size_kb": 64, 00:07:30.791 "superblock": false, 00:07:30.791 "method": "bdev_raid_create", 00:07:30.791 "req_id": 1 00:07:30.791 } 00:07:30.791 Got JSON-RPC error response 00:07:30.791 response: 00:07:30.791 { 00:07:30.791 "code": -17, 00:07:30.791 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:30.791 } 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:30.791 
17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.791 [2024-11-26 17:52:12.576999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:30.791 [2024-11-26 17:52:12.577120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.791 [2024-11-26 17:52:12.577162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:30.791 [2024-11-26 17:52:12.577193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.791 [2024-11-26 17:52:12.579427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.791 [2024-11-26 17:52:12.579502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:30.791 [2024-11-26 17:52:12.579609] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:30.791 [2024-11-26 17:52:12.579699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:30.791 pt1 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.791 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.791 "name": "raid_bdev1", 00:07:30.791 "uuid": "1730a263-2914-4a59-8063-382677a46eb9", 00:07:30.791 "strip_size_kb": 64, 00:07:30.791 "state": "configuring", 00:07:30.791 "raid_level": "raid0", 00:07:30.791 "superblock": true, 00:07:30.791 "num_base_bdevs": 2, 00:07:30.791 "num_base_bdevs_discovered": 1, 00:07:30.791 "num_base_bdevs_operational": 2, 00:07:30.792 "base_bdevs_list": [ 00:07:30.792 { 00:07:30.792 "name": "pt1", 00:07:30.792 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:30.792 "is_configured": true, 00:07:30.792 "data_offset": 2048, 00:07:30.792 "data_size": 63488 00:07:30.792 }, 00:07:30.792 { 00:07:30.792 "name": null, 00:07:30.792 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.792 "is_configured": false, 00:07:30.792 "data_offset": 2048, 00:07:30.792 "data_size": 63488 00:07:30.792 } 00:07:30.792 ] 00:07:30.792 }' 00:07:30.792 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.792 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.359 [2024-11-26 17:52:13.028739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:31.359 [2024-11-26 17:52:13.028821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.359 [2024-11-26 17:52:13.028845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:31.359 [2024-11-26 17:52:13.028856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.359 [2024-11-26 17:52:13.029405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.359 [2024-11-26 17:52:13.029434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:31.359 [2024-11-26 17:52:13.029533] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:31.359 [2024-11-26 17:52:13.029562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:31.359 [2024-11-26 17:52:13.029683] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:31.359 [2024-11-26 17:52:13.029695] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:31.359 [2024-11-26 17:52:13.029987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:31.359 [2024-11-26 17:52:13.030189] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:31.359 [2024-11-26 17:52:13.030255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:31.359 [2024-11-26 17:52:13.030433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.359 pt2 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.359 "name": "raid_bdev1", 00:07:31.359 "uuid": "1730a263-2914-4a59-8063-382677a46eb9", 00:07:31.359 "strip_size_kb": 64, 00:07:31.359 "state": "online", 00:07:31.359 "raid_level": "raid0", 00:07:31.359 "superblock": true, 00:07:31.359 "num_base_bdevs": 2, 00:07:31.359 "num_base_bdevs_discovered": 2, 00:07:31.359 "num_base_bdevs_operational": 2, 00:07:31.359 "base_bdevs_list": [ 00:07:31.359 { 00:07:31.359 "name": "pt1", 00:07:31.359 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.359 "is_configured": true, 00:07:31.359 "data_offset": 2048, 00:07:31.359 "data_size": 63488 00:07:31.359 }, 00:07:31.359 { 00:07:31.359 "name": "pt2", 00:07:31.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.359 "is_configured": true, 00:07:31.359 "data_offset": 2048, 00:07:31.359 "data_size": 63488 00:07:31.359 } 00:07:31.359 ] 00:07:31.359 }' 00:07:31.359 17:52:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.359 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.927 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:31.927 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:31.927 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:31.927 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:31.927 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:31.927 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:31.927 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:31.927 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.927 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.927 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:31.927 [2024-11-26 17:52:13.492248] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.927 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.927 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:31.927 "name": "raid_bdev1", 00:07:31.927 "aliases": [ 00:07:31.927 "1730a263-2914-4a59-8063-382677a46eb9" 00:07:31.927 ], 00:07:31.927 "product_name": "Raid Volume", 00:07:31.927 "block_size": 512, 00:07:31.927 "num_blocks": 126976, 00:07:31.927 "uuid": "1730a263-2914-4a59-8063-382677a46eb9", 00:07:31.927 "assigned_rate_limits": { 00:07:31.927 "rw_ios_per_sec": 0, 00:07:31.927 "rw_mbytes_per_sec": 0, 00:07:31.927 
"r_mbytes_per_sec": 0, 00:07:31.927 "w_mbytes_per_sec": 0 00:07:31.927 }, 00:07:31.927 "claimed": false, 00:07:31.927 "zoned": false, 00:07:31.927 "supported_io_types": { 00:07:31.927 "read": true, 00:07:31.927 "write": true, 00:07:31.927 "unmap": true, 00:07:31.927 "flush": true, 00:07:31.927 "reset": true, 00:07:31.927 "nvme_admin": false, 00:07:31.927 "nvme_io": false, 00:07:31.927 "nvme_io_md": false, 00:07:31.927 "write_zeroes": true, 00:07:31.927 "zcopy": false, 00:07:31.927 "get_zone_info": false, 00:07:31.927 "zone_management": false, 00:07:31.927 "zone_append": false, 00:07:31.927 "compare": false, 00:07:31.927 "compare_and_write": false, 00:07:31.927 "abort": false, 00:07:31.927 "seek_hole": false, 00:07:31.928 "seek_data": false, 00:07:31.928 "copy": false, 00:07:31.928 "nvme_iov_md": false 00:07:31.928 }, 00:07:31.928 "memory_domains": [ 00:07:31.928 { 00:07:31.928 "dma_device_id": "system", 00:07:31.928 "dma_device_type": 1 00:07:31.928 }, 00:07:31.928 { 00:07:31.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.928 "dma_device_type": 2 00:07:31.928 }, 00:07:31.928 { 00:07:31.928 "dma_device_id": "system", 00:07:31.928 "dma_device_type": 1 00:07:31.928 }, 00:07:31.928 { 00:07:31.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.928 "dma_device_type": 2 00:07:31.928 } 00:07:31.928 ], 00:07:31.928 "driver_specific": { 00:07:31.928 "raid": { 00:07:31.928 "uuid": "1730a263-2914-4a59-8063-382677a46eb9", 00:07:31.928 "strip_size_kb": 64, 00:07:31.928 "state": "online", 00:07:31.928 "raid_level": "raid0", 00:07:31.928 "superblock": true, 00:07:31.928 "num_base_bdevs": 2, 00:07:31.928 "num_base_bdevs_discovered": 2, 00:07:31.928 "num_base_bdevs_operational": 2, 00:07:31.928 "base_bdevs_list": [ 00:07:31.928 { 00:07:31.928 "name": "pt1", 00:07:31.928 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.928 "is_configured": true, 00:07:31.928 "data_offset": 2048, 00:07:31.928 "data_size": 63488 00:07:31.928 }, 00:07:31.928 { 00:07:31.928 "name": 
"pt2", 00:07:31.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.928 "is_configured": true, 00:07:31.928 "data_offset": 2048, 00:07:31.928 "data_size": 63488 00:07:31.928 } 00:07:31.928 ] 00:07:31.928 } 00:07:31.928 } 00:07:31.928 }' 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:31.928 pt2' 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:31.928 [2024-11-26 17:52:13.743782] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1730a263-2914-4a59-8063-382677a46eb9 '!=' 1730a263-2914-4a59-8063-382677a46eb9 ']' 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61382 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61382 ']' 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 61382 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.928 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61382 00:07:32.188 killing process with pid 61382 00:07:32.188 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.188 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.188 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61382' 00:07:32.188 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61382 00:07:32.188 [2024-11-26 17:52:13.805873] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:32.188 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61382 00:07:32.188 [2024-11-26 17:52:13.805982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.188 [2024-11-26 17:52:13.806051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.188 [2024-11-26 17:52:13.806102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:32.188 [2024-11-26 17:52:14.024208] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:33.568 17:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:33.568 00:07:33.568 real 0m4.723s 00:07:33.568 user 0m6.598s 00:07:33.568 sys 0m0.757s 00:07:33.568 17:52:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.568 17:52:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:33.568 ************************************ 00:07:33.568 END TEST raid_superblock_test 00:07:33.568 ************************************ 00:07:33.568 17:52:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:33.568 17:52:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:33.568 17:52:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.568 17:52:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:33.568 ************************************ 00:07:33.568 START TEST raid_read_error_test 00:07:33.568 ************************************ 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sB2EeZneSs 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61593 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61593 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61593 ']' 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.568 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:33.828 [2024-11-26 17:52:15.438841] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:07:33.828 [2024-11-26 17:52:15.438994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61593 ] 00:07:33.828 [2024-11-26 17:52:15.618233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.088 [2024-11-26 17:52:15.752955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.347 [2024-11-26 17:52:15.980659] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.347 [2024-11-26 17:52:15.980736] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.606 BaseBdev1_malloc 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.606 true 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.606 [2024-11-26 17:52:16.397139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:34.606 [2024-11-26 17:52:16.397207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.606 [2024-11-26 17:52:16.397232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:34.606 [2024-11-26 17:52:16.397245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:34.606 [2024-11-26 17:52:16.399787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.606 [2024-11-26 17:52:16.399836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:34.606 BaseBdev1 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.606 17:52:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.606 BaseBdev2_malloc 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.606 true 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.606 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.866 [2024-11-26 17:52:16.469649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:34.866 [2024-11-26 17:52:16.469718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.866 [2024-11-26 17:52:16.469739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:34.866 [2024-11-26 17:52:16.469753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:34.866 [2024-11-26 17:52:16.472315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.866 
[2024-11-26 17:52:16.472366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:34.866 BaseBdev2 00:07:34.866 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.866 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:34.866 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.866 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.866 [2024-11-26 17:52:16.481724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:34.866 [2024-11-26 17:52:16.484274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:34.866 [2024-11-26 17:52:16.484539] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:34.866 [2024-11-26 17:52:16.484562] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:34.866 [2024-11-26 17:52:16.484894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:34.866 [2024-11-26 17:52:16.485210] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:34.866 [2024-11-26 17:52:16.485265] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:34.866 [2024-11-26 17:52:16.485583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.866 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.866 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:34.866 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:34.866 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:34.866 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:34.866 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.866 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:34.866 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.866 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.867 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.867 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.867 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:34.867 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.867 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.867 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.867 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.867 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.867 "name": "raid_bdev1", 00:07:34.867 "uuid": "8413e09e-c891-4f00-a8f8-09620a0e5915", 00:07:34.867 "strip_size_kb": 64, 00:07:34.867 "state": "online", 00:07:34.867 "raid_level": "raid0", 00:07:34.867 "superblock": true, 00:07:34.867 "num_base_bdevs": 2, 00:07:34.867 "num_base_bdevs_discovered": 2, 00:07:34.867 "num_base_bdevs_operational": 2, 00:07:34.867 "base_bdevs_list": [ 00:07:34.867 { 00:07:34.867 "name": "BaseBdev1", 00:07:34.867 "uuid": 
"d72caebf-80e9-56cd-8b34-ebf35c548ad7", 00:07:34.867 "is_configured": true, 00:07:34.867 "data_offset": 2048, 00:07:34.867 "data_size": 63488 00:07:34.867 }, 00:07:34.867 { 00:07:34.867 "name": "BaseBdev2", 00:07:34.867 "uuid": "8a6c978c-9481-5803-b71f-5869ce99192c", 00:07:34.867 "is_configured": true, 00:07:34.867 "data_offset": 2048, 00:07:34.867 "data_size": 63488 00:07:34.867 } 00:07:34.867 ] 00:07:34.867 }' 00:07:34.867 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.867 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.126 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:35.126 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:35.386 [2024-11-26 17:52:17.034553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.323 "name": "raid_bdev1", 00:07:36.323 "uuid": "8413e09e-c891-4f00-a8f8-09620a0e5915", 00:07:36.323 "strip_size_kb": 64, 00:07:36.323 "state": "online", 00:07:36.323 "raid_level": "raid0", 00:07:36.323 "superblock": true, 00:07:36.323 "num_base_bdevs": 2, 00:07:36.323 "num_base_bdevs_discovered": 2, 00:07:36.323 "num_base_bdevs_operational": 2, 00:07:36.323 "base_bdevs_list": [ 00:07:36.323 { 00:07:36.323 "name": "BaseBdev1", 00:07:36.323 "uuid": 
"d72caebf-80e9-56cd-8b34-ebf35c548ad7", 00:07:36.323 "is_configured": true, 00:07:36.323 "data_offset": 2048, 00:07:36.323 "data_size": 63488 00:07:36.323 }, 00:07:36.323 { 00:07:36.323 "name": "BaseBdev2", 00:07:36.323 "uuid": "8a6c978c-9481-5803-b71f-5869ce99192c", 00:07:36.323 "is_configured": true, 00:07:36.323 "data_offset": 2048, 00:07:36.323 "data_size": 63488 00:07:36.323 } 00:07:36.323 ] 00:07:36.323 }' 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.323 17:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.583 17:52:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:36.583 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.583 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.583 [2024-11-26 17:52:18.402826] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:36.583 [2024-11-26 17:52:18.402917] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:36.583 [2024-11-26 17:52:18.406011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:36.583 [2024-11-26 17:52:18.406136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:36.583 [2024-11-26 17:52:18.406206] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:36.583 [2024-11-26 17:52:18.406258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:36.583 { 00:07:36.583 "results": [ 00:07:36.583 { 00:07:36.583 "job": "raid_bdev1", 00:07:36.583 "core_mask": "0x1", 00:07:36.583 "workload": "randrw", 00:07:36.583 "percentage": 50, 00:07:36.583 "status": "finished", 00:07:36.583 "queue_depth": 1, 00:07:36.583 "io_size": 
131072, 00:07:36.583 "runtime": 1.369172, 00:07:36.583 "iops": 14455.451908160552, 00:07:36.583 "mibps": 1806.931488520069, 00:07:36.583 "io_failed": 1, 00:07:36.583 "io_timeout": 0, 00:07:36.583 "avg_latency_us": 95.59664554338273, 00:07:36.583 "min_latency_us": 26.829694323144103, 00:07:36.583 "max_latency_us": 1574.0087336244542 00:07:36.583 } 00:07:36.583 ], 00:07:36.583 "core_count": 1 00:07:36.583 } 00:07:36.583 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.583 17:52:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61593 00:07:36.583 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61593 ']' 00:07:36.583 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61593 00:07:36.583 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:36.583 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.583 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61593 00:07:36.843 killing process with pid 61593 00:07:36.843 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.843 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:36.843 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61593' 00:07:36.843 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61593 00:07:36.843 [2024-11-26 17:52:18.455708] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:36.843 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61593 00:07:36.843 [2024-11-26 17:52:18.609628] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.223 17:52:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sB2EeZneSs 00:07:38.223 17:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:38.223 17:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:38.223 17:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:38.223 17:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:38.223 17:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.223 17:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.223 17:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:38.223 00:07:38.223 real 0m4.548s 00:07:38.223 user 0m5.474s 00:07:38.223 sys 0m0.577s 00:07:38.223 17:52:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.223 17:52:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.223 ************************************ 00:07:38.223 END TEST raid_read_error_test 00:07:38.223 ************************************ 00:07:38.223 17:52:19 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:38.223 17:52:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:38.223 17:52:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.223 17:52:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.223 ************************************ 00:07:38.223 START TEST raid_write_error_test 00:07:38.223 ************************************ 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:38.223 
17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:38.223 17:52:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BWUStPYnyE 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61739 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61739 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61739 ']' 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.223 17:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.223 [2024-11-26 17:52:20.053652] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:07:38.223 [2024-11-26 17:52:20.053871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61739 ] 00:07:38.483 [2024-11-26 17:52:20.209377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.483 [2024-11-26 17:52:20.328087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.761 [2024-11-26 17:52:20.538307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.761 [2024-11-26 17:52:20.538443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.330 BaseBdev1_malloc 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.330 true 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.330 [2024-11-26 17:52:20.980577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:39.330 [2024-11-26 17:52:20.980636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.330 [2024-11-26 17:52:20.980656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:39.330 [2024-11-26 17:52:20.980667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.330 [2024-11-26 17:52:20.982868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.330 [2024-11-26 17:52:20.982909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:39.330 BaseBdev1 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.330 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.330 BaseBdev2_malloc 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:39.330 17:52:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.330 true 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.330 [2024-11-26 17:52:21.049090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:39.330 [2024-11-26 17:52:21.049216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.330 [2024-11-26 17:52:21.049255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:39.330 [2024-11-26 17:52:21.049317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.330 [2024-11-26 17:52:21.051508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.330 [2024-11-26 17:52:21.051585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:39.330 BaseBdev2 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.330 [2024-11-26 17:52:21.061130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:39.330 [2024-11-26 17:52:21.062946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.330 [2024-11-26 17:52:21.063206] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:39.330 [2024-11-26 17:52:21.063266] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:39.330 [2024-11-26 17:52:21.063524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:39.330 [2024-11-26 17:52:21.063732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:39.330 [2024-11-26 17:52:21.063780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:39.330 [2024-11-26 17:52:21.063977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.330 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.331 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.331 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.331 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.331 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.331 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.331 "name": "raid_bdev1", 00:07:39.331 "uuid": "32c311d7-38af-442a-a711-36fb2020b429", 00:07:39.331 "strip_size_kb": 64, 00:07:39.331 "state": "online", 00:07:39.331 "raid_level": "raid0", 00:07:39.331 "superblock": true, 00:07:39.331 "num_base_bdevs": 2, 00:07:39.331 "num_base_bdevs_discovered": 2, 00:07:39.331 "num_base_bdevs_operational": 2, 00:07:39.331 "base_bdevs_list": [ 00:07:39.331 { 00:07:39.331 "name": "BaseBdev1", 00:07:39.331 "uuid": "3468a7c3-7568-5954-b169-f4293c88d257", 00:07:39.331 "is_configured": true, 00:07:39.331 "data_offset": 2048, 00:07:39.331 "data_size": 63488 00:07:39.331 }, 00:07:39.331 { 00:07:39.331 "name": "BaseBdev2", 00:07:39.331 "uuid": "23e91697-32e8-5a09-b663-4ab8a366e0ef", 00:07:39.331 "is_configured": true, 00:07:39.331 "data_offset": 2048, 00:07:39.331 "data_size": 63488 00:07:39.331 } 00:07:39.331 ] 00:07:39.331 }' 00:07:39.331 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.331 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.901 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:39.901 17:52:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:39.901 [2024-11-26 17:52:21.621786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.841 17:52:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.841 "name": "raid_bdev1", 00:07:40.841 "uuid": "32c311d7-38af-442a-a711-36fb2020b429", 00:07:40.841 "strip_size_kb": 64, 00:07:40.841 "state": "online", 00:07:40.841 "raid_level": "raid0", 00:07:40.841 "superblock": true, 00:07:40.841 "num_base_bdevs": 2, 00:07:40.841 "num_base_bdevs_discovered": 2, 00:07:40.841 "num_base_bdevs_operational": 2, 00:07:40.841 "base_bdevs_list": [ 00:07:40.841 { 00:07:40.841 "name": "BaseBdev1", 00:07:40.841 "uuid": "3468a7c3-7568-5954-b169-f4293c88d257", 00:07:40.841 "is_configured": true, 00:07:40.841 "data_offset": 2048, 00:07:40.841 "data_size": 63488 00:07:40.841 }, 00:07:40.841 { 00:07:40.841 "name": "BaseBdev2", 00:07:40.841 "uuid": "23e91697-32e8-5a09-b663-4ab8a366e0ef", 00:07:40.841 "is_configured": true, 00:07:40.841 "data_offset": 2048, 00:07:40.841 "data_size": 63488 00:07:40.841 } 00:07:40.841 ] 00:07:40.841 }' 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.841 17:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.410 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:41.410 17:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.410 17:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.410 [2024-11-26 17:52:22.994662] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.410 [2024-11-26 17:52:22.994805] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.410 [2024-11-26 17:52:22.997848] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.410 [2024-11-26 17:52:22.997989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.410 [2024-11-26 17:52:22.998065] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.410 [2024-11-26 17:52:22.998148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:41.410 { 00:07:41.410 "results": [ 00:07:41.410 { 00:07:41.410 "job": "raid_bdev1", 00:07:41.410 "core_mask": "0x1", 00:07:41.410 "workload": "randrw", 00:07:41.410 "percentage": 50, 00:07:41.410 "status": "finished", 00:07:41.410 "queue_depth": 1, 00:07:41.410 "io_size": 131072, 00:07:41.410 "runtime": 1.37368, 00:07:41.410 "iops": 14651.156018869024, 00:07:41.410 "mibps": 1831.394502358628, 00:07:41.410 "io_failed": 1, 00:07:41.410 "io_timeout": 0, 00:07:41.410 "avg_latency_us": 94.27898920457713, 00:07:41.410 "min_latency_us": 26.941484716157206, 00:07:41.410 "max_latency_us": 1495.3082969432314 00:07:41.410 } 00:07:41.410 ], 00:07:41.410 "core_count": 1 00:07:41.410 } 00:07:41.410 17:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.410 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61739 00:07:41.410 17:52:23 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61739 ']' 00:07:41.410 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61739 00:07:41.410 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:41.410 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.410 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61739 00:07:41.410 killing process with pid 61739 00:07:41.410 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.410 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.410 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61739' 00:07:41.410 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61739 00:07:41.410 [2024-11-26 17:52:23.043938] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.410 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61739 00:07:41.410 [2024-11-26 17:52:23.183394] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.790 17:52:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BWUStPYnyE 00:07:42.790 17:52:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:42.790 17:52:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:42.790 17:52:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:42.790 17:52:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:42.790 17:52:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:42.790 17:52:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:42.790 17:52:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:42.790 00:07:42.790 real 0m4.503s 00:07:42.790 user 0m5.443s 00:07:42.790 sys 0m0.516s 00:07:42.790 17:52:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.790 17:52:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.790 ************************************ 00:07:42.790 END TEST raid_write_error_test 00:07:42.790 ************************************ 00:07:42.790 17:52:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:42.790 17:52:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:42.790 17:52:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:42.790 17:52:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.790 17:52:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.790 ************************************ 00:07:42.790 START TEST raid_state_function_test 00:07:42.790 ************************************ 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:42.790 Process raid pid: 61877 00:07:42.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61877 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61877' 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61877 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61877 ']' 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.790 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:42.790 [2024-11-26 17:52:24.611599] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:07:42.790 [2024-11-26 17:52:24.611717] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.052 [2024-11-26 17:52:24.790636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.311 [2024-11-26 17:52:24.921880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.311 [2024-11-26 17:52:25.141130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.311 [2024-11-26 17:52:25.141174] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.876 [2024-11-26 17:52:25.479409] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:43.876 [2024-11-26 17:52:25.479461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:43.876 [2024-11-26 17:52:25.479473] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:43.876 [2024-11-26 17:52:25.479482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.876 17:52:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.876 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.876 "name": "Existed_Raid", 00:07:43.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.877 "strip_size_kb": 64, 00:07:43.877 "state": "configuring", 00:07:43.877 
"raid_level": "concat", 00:07:43.877 "superblock": false, 00:07:43.877 "num_base_bdevs": 2, 00:07:43.877 "num_base_bdevs_discovered": 0, 00:07:43.877 "num_base_bdevs_operational": 2, 00:07:43.877 "base_bdevs_list": [ 00:07:43.877 { 00:07:43.877 "name": "BaseBdev1", 00:07:43.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.877 "is_configured": false, 00:07:43.877 "data_offset": 0, 00:07:43.877 "data_size": 0 00:07:43.877 }, 00:07:43.877 { 00:07:43.877 "name": "BaseBdev2", 00:07:43.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.877 "is_configured": false, 00:07:43.877 "data_offset": 0, 00:07:43.877 "data_size": 0 00:07:43.877 } 00:07:43.877 ] 00:07:43.877 }' 00:07:43.877 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.877 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.136 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:44.136 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.136 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.136 [2024-11-26 17:52:25.934570] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:44.136 [2024-11-26 17:52:25.934660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:44.136 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.137 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.137 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.137 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:44.137 [2024-11-26 17:52:25.946549] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.137 [2024-11-26 17:52:25.946631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.137 [2024-11-26 17:52:25.946660] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.137 [2024-11-26 17:52:25.946686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.137 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.137 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:44.137 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.137 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.137 [2024-11-26 17:52:25.996979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.396 BaseBdev1 00:07:44.396 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.396 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:44.396 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:44.396 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:44.396 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:44.396 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:44.396 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:44.396 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:44.396 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.396 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.396 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.396 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:44.396 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.396 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.396 [ 00:07:44.396 { 00:07:44.396 "name": "BaseBdev1", 00:07:44.396 "aliases": [ 00:07:44.396 "b4aa261c-d826-4dd3-81f6-eba033097a60" 00:07:44.396 ], 00:07:44.396 "product_name": "Malloc disk", 00:07:44.396 "block_size": 512, 00:07:44.396 "num_blocks": 65536, 00:07:44.396 "uuid": "b4aa261c-d826-4dd3-81f6-eba033097a60", 00:07:44.396 "assigned_rate_limits": { 00:07:44.396 "rw_ios_per_sec": 0, 00:07:44.396 "rw_mbytes_per_sec": 0, 00:07:44.396 "r_mbytes_per_sec": 0, 00:07:44.396 "w_mbytes_per_sec": 0 00:07:44.396 }, 00:07:44.396 "claimed": true, 00:07:44.396 "claim_type": "exclusive_write", 00:07:44.396 "zoned": false, 00:07:44.396 "supported_io_types": { 00:07:44.396 "read": true, 00:07:44.396 "write": true, 00:07:44.396 "unmap": true, 00:07:44.396 "flush": true, 00:07:44.396 "reset": true, 00:07:44.396 "nvme_admin": false, 00:07:44.396 "nvme_io": false, 00:07:44.396 "nvme_io_md": false, 00:07:44.396 "write_zeroes": true, 00:07:44.396 "zcopy": true, 00:07:44.396 "get_zone_info": false, 00:07:44.396 "zone_management": false, 00:07:44.396 "zone_append": false, 00:07:44.396 "compare": false, 00:07:44.396 "compare_and_write": false, 00:07:44.397 "abort": true, 00:07:44.397 "seek_hole": false, 00:07:44.397 "seek_data": false, 00:07:44.397 "copy": true, 00:07:44.397 "nvme_iov_md": 
false 00:07:44.397 }, 00:07:44.397 "memory_domains": [ 00:07:44.397 { 00:07:44.397 "dma_device_id": "system", 00:07:44.397 "dma_device_type": 1 00:07:44.397 }, 00:07:44.397 { 00:07:44.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.397 "dma_device_type": 2 00:07:44.397 } 00:07:44.397 ], 00:07:44.397 "driver_specific": {} 00:07:44.397 } 00:07:44.397 ] 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.397 
17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.397 "name": "Existed_Raid", 00:07:44.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.397 "strip_size_kb": 64, 00:07:44.397 "state": "configuring", 00:07:44.397 "raid_level": "concat", 00:07:44.397 "superblock": false, 00:07:44.397 "num_base_bdevs": 2, 00:07:44.397 "num_base_bdevs_discovered": 1, 00:07:44.397 "num_base_bdevs_operational": 2, 00:07:44.397 "base_bdevs_list": [ 00:07:44.397 { 00:07:44.397 "name": "BaseBdev1", 00:07:44.397 "uuid": "b4aa261c-d826-4dd3-81f6-eba033097a60", 00:07:44.397 "is_configured": true, 00:07:44.397 "data_offset": 0, 00:07:44.397 "data_size": 65536 00:07:44.397 }, 00:07:44.397 { 00:07:44.397 "name": "BaseBdev2", 00:07:44.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.397 "is_configured": false, 00:07:44.397 "data_offset": 0, 00:07:44.397 "data_size": 0 00:07:44.397 } 00:07:44.397 ] 00:07:44.397 }' 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.397 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.657 [2024-11-26 17:52:26.468237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:44.657 [2024-11-26 17:52:26.468367] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.657 [2024-11-26 17:52:26.476257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.657 [2024-11-26 17:52:26.478136] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.657 [2024-11-26 17:52:26.478235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.657 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.916 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.916 "name": "Existed_Raid", 00:07:44.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.916 "strip_size_kb": 64, 00:07:44.916 "state": "configuring", 00:07:44.916 "raid_level": "concat", 00:07:44.916 "superblock": false, 00:07:44.916 "num_base_bdevs": 2, 00:07:44.916 "num_base_bdevs_discovered": 1, 00:07:44.916 "num_base_bdevs_operational": 2, 00:07:44.916 "base_bdevs_list": [ 00:07:44.916 { 00:07:44.916 "name": "BaseBdev1", 00:07:44.916 "uuid": "b4aa261c-d826-4dd3-81f6-eba033097a60", 00:07:44.916 "is_configured": true, 00:07:44.916 "data_offset": 0, 00:07:44.916 "data_size": 65536 00:07:44.916 }, 00:07:44.916 { 00:07:44.916 "name": "BaseBdev2", 00:07:44.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.916 "is_configured": false, 00:07:44.916 "data_offset": 0, 00:07:44.916 "data_size": 0 00:07:44.916 } 
00:07:44.916 ] 00:07:44.916 }' 00:07:44.916 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.916 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.177 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:45.177 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.177 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.177 [2024-11-26 17:52:26.960301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:45.177 [2024-11-26 17:52:26.960423] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:45.177 [2024-11-26 17:52:26.960449] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:45.177 [2024-11-26 17:52:26.960762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:45.177 [2024-11-26 17:52:26.961024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:45.177 [2024-11-26 17:52:26.961091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:45.177 [2024-11-26 17:52:26.961430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.177 BaseBdev2 00:07:45.177 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.177 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:45.177 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:45.177 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:45.177 17:52:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:45.177 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:45.177 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:45.177 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:45.177 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.177 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.177 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.177 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:45.177 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.177 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.177 [ 00:07:45.177 { 00:07:45.177 "name": "BaseBdev2", 00:07:45.177 "aliases": [ 00:07:45.177 "bf1f05da-7c89-40f4-b809-d1e855b12e7f" 00:07:45.177 ], 00:07:45.177 "product_name": "Malloc disk", 00:07:45.177 "block_size": 512, 00:07:45.177 "num_blocks": 65536, 00:07:45.177 "uuid": "bf1f05da-7c89-40f4-b809-d1e855b12e7f", 00:07:45.177 "assigned_rate_limits": { 00:07:45.177 "rw_ios_per_sec": 0, 00:07:45.177 "rw_mbytes_per_sec": 0, 00:07:45.177 "r_mbytes_per_sec": 0, 00:07:45.177 "w_mbytes_per_sec": 0 00:07:45.177 }, 00:07:45.177 "claimed": true, 00:07:45.177 "claim_type": "exclusive_write", 00:07:45.177 "zoned": false, 00:07:45.177 "supported_io_types": { 00:07:45.177 "read": true, 00:07:45.177 "write": true, 00:07:45.177 "unmap": true, 00:07:45.177 "flush": true, 00:07:45.177 "reset": true, 00:07:45.177 "nvme_admin": false, 00:07:45.177 "nvme_io": false, 00:07:45.177 "nvme_io_md": 
false, 00:07:45.177 "write_zeroes": true, 00:07:45.177 "zcopy": true, 00:07:45.177 "get_zone_info": false, 00:07:45.177 "zone_management": false, 00:07:45.177 "zone_append": false, 00:07:45.177 "compare": false, 00:07:45.177 "compare_and_write": false, 00:07:45.177 "abort": true, 00:07:45.177 "seek_hole": false, 00:07:45.177 "seek_data": false, 00:07:45.177 "copy": true, 00:07:45.177 "nvme_iov_md": false 00:07:45.177 }, 00:07:45.177 "memory_domains": [ 00:07:45.177 { 00:07:45.177 "dma_device_id": "system", 00:07:45.177 "dma_device_type": 1 00:07:45.177 }, 00:07:45.177 { 00:07:45.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.177 "dma_device_type": 2 00:07:45.177 } 00:07:45.177 ], 00:07:45.177 "driver_specific": {} 00:07:45.177 } 00:07:45.177 ] 00:07:45.177 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.177 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:45.177 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:45.177 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:45.177 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:45.177 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.177 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.177 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:45.177 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.177 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.177 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:45.178 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.178 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.178 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.178 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.178 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.178 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.178 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.178 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.437 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.437 "name": "Existed_Raid", 00:07:45.437 "uuid": "a92c2924-9e0e-4ebc-9fb0-7cb8e9933a04", 00:07:45.437 "strip_size_kb": 64, 00:07:45.437 "state": "online", 00:07:45.437 "raid_level": "concat", 00:07:45.437 "superblock": false, 00:07:45.437 "num_base_bdevs": 2, 00:07:45.437 "num_base_bdevs_discovered": 2, 00:07:45.437 "num_base_bdevs_operational": 2, 00:07:45.437 "base_bdevs_list": [ 00:07:45.437 { 00:07:45.437 "name": "BaseBdev1", 00:07:45.437 "uuid": "b4aa261c-d826-4dd3-81f6-eba033097a60", 00:07:45.437 "is_configured": true, 00:07:45.437 "data_offset": 0, 00:07:45.437 "data_size": 65536 00:07:45.437 }, 00:07:45.437 { 00:07:45.437 "name": "BaseBdev2", 00:07:45.437 "uuid": "bf1f05da-7c89-40f4-b809-d1e855b12e7f", 00:07:45.437 "is_configured": true, 00:07:45.437 "data_offset": 0, 00:07:45.437 "data_size": 65536 00:07:45.437 } 00:07:45.437 ] 00:07:45.437 }' 00:07:45.437 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:45.437 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.697 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:45.697 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:45.697 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:45.697 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:45.697 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:45.697 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:45.697 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:45.697 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:45.697 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.697 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.697 [2024-11-26 17:52:27.443817] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.697 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.697 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:45.697 "name": "Existed_Raid", 00:07:45.697 "aliases": [ 00:07:45.697 "a92c2924-9e0e-4ebc-9fb0-7cb8e9933a04" 00:07:45.697 ], 00:07:45.697 "product_name": "Raid Volume", 00:07:45.697 "block_size": 512, 00:07:45.697 "num_blocks": 131072, 00:07:45.697 "uuid": "a92c2924-9e0e-4ebc-9fb0-7cb8e9933a04", 00:07:45.697 "assigned_rate_limits": { 00:07:45.697 "rw_ios_per_sec": 0, 00:07:45.697 "rw_mbytes_per_sec": 0, 00:07:45.697 "r_mbytes_per_sec": 
0, 00:07:45.697 "w_mbytes_per_sec": 0 00:07:45.697 }, 00:07:45.697 "claimed": false, 00:07:45.697 "zoned": false, 00:07:45.698 "supported_io_types": { 00:07:45.698 "read": true, 00:07:45.698 "write": true, 00:07:45.698 "unmap": true, 00:07:45.698 "flush": true, 00:07:45.698 "reset": true, 00:07:45.698 "nvme_admin": false, 00:07:45.698 "nvme_io": false, 00:07:45.698 "nvme_io_md": false, 00:07:45.698 "write_zeroes": true, 00:07:45.698 "zcopy": false, 00:07:45.698 "get_zone_info": false, 00:07:45.698 "zone_management": false, 00:07:45.698 "zone_append": false, 00:07:45.698 "compare": false, 00:07:45.698 "compare_and_write": false, 00:07:45.698 "abort": false, 00:07:45.698 "seek_hole": false, 00:07:45.698 "seek_data": false, 00:07:45.698 "copy": false, 00:07:45.698 "nvme_iov_md": false 00:07:45.698 }, 00:07:45.698 "memory_domains": [ 00:07:45.698 { 00:07:45.698 "dma_device_id": "system", 00:07:45.698 "dma_device_type": 1 00:07:45.698 }, 00:07:45.698 { 00:07:45.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.698 "dma_device_type": 2 00:07:45.698 }, 00:07:45.698 { 00:07:45.698 "dma_device_id": "system", 00:07:45.698 "dma_device_type": 1 00:07:45.698 }, 00:07:45.698 { 00:07:45.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.698 "dma_device_type": 2 00:07:45.698 } 00:07:45.698 ], 00:07:45.698 "driver_specific": { 00:07:45.698 "raid": { 00:07:45.698 "uuid": "a92c2924-9e0e-4ebc-9fb0-7cb8e9933a04", 00:07:45.698 "strip_size_kb": 64, 00:07:45.698 "state": "online", 00:07:45.698 "raid_level": "concat", 00:07:45.698 "superblock": false, 00:07:45.698 "num_base_bdevs": 2, 00:07:45.698 "num_base_bdevs_discovered": 2, 00:07:45.698 "num_base_bdevs_operational": 2, 00:07:45.698 "base_bdevs_list": [ 00:07:45.698 { 00:07:45.698 "name": "BaseBdev1", 00:07:45.698 "uuid": "b4aa261c-d826-4dd3-81f6-eba033097a60", 00:07:45.698 "is_configured": true, 00:07:45.698 "data_offset": 0, 00:07:45.698 "data_size": 65536 00:07:45.698 }, 00:07:45.698 { 00:07:45.698 "name": "BaseBdev2", 
00:07:45.698 "uuid": "bf1f05da-7c89-40f4-b809-d1e855b12e7f", 00:07:45.698 "is_configured": true, 00:07:45.698 "data_offset": 0, 00:07:45.698 "data_size": 65536 00:07:45.698 } 00:07:45.698 ] 00:07:45.698 } 00:07:45.698 } 00:07:45.698 }' 00:07:45.698 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:45.698 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:45.698 BaseBdev2' 00:07:45.698 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.958 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:45.958 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.958 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:45.958 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.958 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.958 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.958 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.958 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.958 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.958 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.958 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:45.958 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.958 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.958 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.958 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.958 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.959 [2024-11-26 17:52:27.683186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:45.959 [2024-11-26 17:52:27.683268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.959 [2024-11-26 17:52:27.683360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.959 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.218 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.218 "name": "Existed_Raid", 00:07:46.218 "uuid": "a92c2924-9e0e-4ebc-9fb0-7cb8e9933a04", 00:07:46.218 "strip_size_kb": 64, 00:07:46.218 
"state": "offline", 00:07:46.218 "raid_level": "concat", 00:07:46.218 "superblock": false, 00:07:46.218 "num_base_bdevs": 2, 00:07:46.218 "num_base_bdevs_discovered": 1, 00:07:46.218 "num_base_bdevs_operational": 1, 00:07:46.218 "base_bdevs_list": [ 00:07:46.218 { 00:07:46.218 "name": null, 00:07:46.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.218 "is_configured": false, 00:07:46.218 "data_offset": 0, 00:07:46.218 "data_size": 65536 00:07:46.218 }, 00:07:46.218 { 00:07:46.218 "name": "BaseBdev2", 00:07:46.218 "uuid": "bf1f05da-7c89-40f4-b809-d1e855b12e7f", 00:07:46.218 "is_configured": true, 00:07:46.218 "data_offset": 0, 00:07:46.218 "data_size": 65536 00:07:46.218 } 00:07:46.218 ] 00:07:46.218 }' 00:07:46.218 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.218 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.481 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:46.481 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:46.481 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.481 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.481 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.481 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:46.481 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.481 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:46.481 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:46.481 17:52:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:46.481 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.481 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.481 [2024-11-26 17:52:28.310905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:46.481 [2024-11-26 17:52:28.311046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:46.741 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.741 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:46.741 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:46.741 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.741 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.741 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.741 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:46.741 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.741 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:46.741 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:46.741 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:46.741 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61877 00:07:46.741 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61877 ']' 00:07:46.741 17:52:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61877 00:07:46.741 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:46.741 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.741 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61877 00:07:46.742 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.742 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.742 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61877' 00:07:46.742 killing process with pid 61877 00:07:46.742 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61877 00:07:46.742 [2024-11-26 17:52:28.508154] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:46.742 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61877 00:07:46.742 [2024-11-26 17:52:28.525576] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:48.123 00:07:48.123 real 0m5.164s 00:07:48.123 user 0m7.488s 00:07:48.123 sys 0m0.807s 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.123 ************************************ 00:07:48.123 END TEST raid_state_function_test 00:07:48.123 ************************************ 00:07:48.123 17:52:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:48.123 17:52:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:48.123 17:52:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.123 17:52:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.123 ************************************ 00:07:48.123 START TEST raid_state_function_test_sb 00:07:48.123 ************************************ 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62130 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62130' 00:07:48.123 Process raid pid: 62130 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62130 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62130 ']' 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.123 17:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.123 [2024-11-26 17:52:29.838924] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:07:48.123 [2024-11-26 17:52:29.839135] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.383 [2024-11-26 17:52:29.995182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.383 [2024-11-26 17:52:30.119478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.643 [2024-11-26 17:52:30.333148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.643 [2024-11-26 17:52:30.333219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.902 17:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.902 17:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:48.902 17:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:48.902 17:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.902 17:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.902 [2024-11-26 17:52:30.682507] bdev.c:8666:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:48.902 [2024-11-26 17:52:30.682625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:48.902 [2024-11-26 17:52:30.682639] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:48.902 [2024-11-26 17:52:30.682649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:48.902 17:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.902 17:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:48.902 17:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.902 17:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.902 17:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.902 17:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.902 17:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.902 17:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.902 17:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.902 17:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.902 17:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.902 17:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.902 17:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.902 
17:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.903 17:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.903 17:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.903 17:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.903 "name": "Existed_Raid", 00:07:48.903 "uuid": "7c1d6914-c24d-48a8-a559-5ca9768bf517", 00:07:48.903 "strip_size_kb": 64, 00:07:48.903 "state": "configuring", 00:07:48.903 "raid_level": "concat", 00:07:48.903 "superblock": true, 00:07:48.903 "num_base_bdevs": 2, 00:07:48.903 "num_base_bdevs_discovered": 0, 00:07:48.903 "num_base_bdevs_operational": 2, 00:07:48.903 "base_bdevs_list": [ 00:07:48.903 { 00:07:48.903 "name": "BaseBdev1", 00:07:48.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.903 "is_configured": false, 00:07:48.903 "data_offset": 0, 00:07:48.903 "data_size": 0 00:07:48.903 }, 00:07:48.903 { 00:07:48.903 "name": "BaseBdev2", 00:07:48.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.903 "is_configured": false, 00:07:48.903 "data_offset": 0, 00:07:48.903 "data_size": 0 00:07:48.903 } 00:07:48.903 ] 00:07:48.903 }' 00:07:48.903 17:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.903 17:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.473 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:49.473 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.473 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.473 [2024-11-26 17:52:31.093758] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:49.473 [2024-11-26 17:52:31.093797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:49.473 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.473 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:49.473 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.473 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.473 [2024-11-26 17:52:31.101755] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:49.473 [2024-11-26 17:52:31.101801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:49.473 [2024-11-26 17:52:31.101813] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.473 [2024-11-26 17:52:31.101825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.473 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.473 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:49.473 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.473 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.473 [2024-11-26 17:52:31.152012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.473 BaseBdev1 00:07:49.473 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.473 17:52:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.474 [ 00:07:49.474 { 00:07:49.474 "name": "BaseBdev1", 00:07:49.474 "aliases": [ 00:07:49.474 "0c8791bb-d01c-4343-a41c-d340652796ff" 00:07:49.474 ], 00:07:49.474 "product_name": "Malloc disk", 00:07:49.474 "block_size": 512, 00:07:49.474 "num_blocks": 65536, 00:07:49.474 "uuid": "0c8791bb-d01c-4343-a41c-d340652796ff", 00:07:49.474 "assigned_rate_limits": { 00:07:49.474 "rw_ios_per_sec": 0, 00:07:49.474 "rw_mbytes_per_sec": 0, 00:07:49.474 "r_mbytes_per_sec": 0, 00:07:49.474 "w_mbytes_per_sec": 0 00:07:49.474 }, 00:07:49.474 "claimed": true, 
00:07:49.474 "claim_type": "exclusive_write", 00:07:49.474 "zoned": false, 00:07:49.474 "supported_io_types": { 00:07:49.474 "read": true, 00:07:49.474 "write": true, 00:07:49.474 "unmap": true, 00:07:49.474 "flush": true, 00:07:49.474 "reset": true, 00:07:49.474 "nvme_admin": false, 00:07:49.474 "nvme_io": false, 00:07:49.474 "nvme_io_md": false, 00:07:49.474 "write_zeroes": true, 00:07:49.474 "zcopy": true, 00:07:49.474 "get_zone_info": false, 00:07:49.474 "zone_management": false, 00:07:49.474 "zone_append": false, 00:07:49.474 "compare": false, 00:07:49.474 "compare_and_write": false, 00:07:49.474 "abort": true, 00:07:49.474 "seek_hole": false, 00:07:49.474 "seek_data": false, 00:07:49.474 "copy": true, 00:07:49.474 "nvme_iov_md": false 00:07:49.474 }, 00:07:49.474 "memory_domains": [ 00:07:49.474 { 00:07:49.474 "dma_device_id": "system", 00:07:49.474 "dma_device_type": 1 00:07:49.474 }, 00:07:49.474 { 00:07:49.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.474 "dma_device_type": 2 00:07:49.474 } 00:07:49.474 ], 00:07:49.474 "driver_specific": {} 00:07:49.474 } 00:07:49.474 ] 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.474 17:52:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.474 "name": "Existed_Raid", 00:07:49.474 "uuid": "4ae722b4-6d75-422e-ba69-5dcda72d1c9a", 00:07:49.474 "strip_size_kb": 64, 00:07:49.474 "state": "configuring", 00:07:49.474 "raid_level": "concat", 00:07:49.474 "superblock": true, 00:07:49.474 "num_base_bdevs": 2, 00:07:49.474 "num_base_bdevs_discovered": 1, 00:07:49.474 "num_base_bdevs_operational": 2, 00:07:49.474 "base_bdevs_list": [ 00:07:49.474 { 00:07:49.474 "name": "BaseBdev1", 00:07:49.474 "uuid": "0c8791bb-d01c-4343-a41c-d340652796ff", 00:07:49.474 "is_configured": true, 00:07:49.474 "data_offset": 2048, 00:07:49.474 "data_size": 63488 00:07:49.474 }, 00:07:49.474 { 00:07:49.474 "name": "BaseBdev2", 00:07:49.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.474 
"is_configured": false, 00:07:49.474 "data_offset": 0, 00:07:49.474 "data_size": 0 00:07:49.474 } 00:07:49.474 ] 00:07:49.474 }' 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.474 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.044 [2024-11-26 17:52:31.679193] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:50.044 [2024-11-26 17:52:31.679257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.044 [2024-11-26 17:52:31.687209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.044 [2024-11-26 17:52:31.689180] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.044 [2024-11-26 17:52:31.689224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.044 17:52:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.044 17:52:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.044 "name": "Existed_Raid", 00:07:50.044 "uuid": "2a1053f0-8881-464f-8778-ff5cc7d454fd", 00:07:50.044 "strip_size_kb": 64, 00:07:50.044 "state": "configuring", 00:07:50.044 "raid_level": "concat", 00:07:50.044 "superblock": true, 00:07:50.044 "num_base_bdevs": 2, 00:07:50.044 "num_base_bdevs_discovered": 1, 00:07:50.044 "num_base_bdevs_operational": 2, 00:07:50.044 "base_bdevs_list": [ 00:07:50.044 { 00:07:50.044 "name": "BaseBdev1", 00:07:50.044 "uuid": "0c8791bb-d01c-4343-a41c-d340652796ff", 00:07:50.044 "is_configured": true, 00:07:50.044 "data_offset": 2048, 00:07:50.044 "data_size": 63488 00:07:50.044 }, 00:07:50.044 { 00:07:50.044 "name": "BaseBdev2", 00:07:50.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.044 "is_configured": false, 00:07:50.044 "data_offset": 0, 00:07:50.044 "data_size": 0 00:07:50.044 } 00:07:50.044 ] 00:07:50.044 }' 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.044 17:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.304 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:50.304 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.304 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.564 [2024-11-26 17:52:32.182610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:50.564 [2024-11-26 17:52:32.182890] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:50.564 [2024-11-26 17:52:32.182906] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:50.564 [2024-11-26 17:52:32.183178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:50.564 [2024-11-26 17:52:32.183355] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:50.564 [2024-11-26 17:52:32.183376] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:50.564 BaseBdev2 00:07:50.564 [2024-11-26 17:52:32.183518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.564 17:52:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.564 [ 00:07:50.564 { 00:07:50.564 "name": "BaseBdev2", 00:07:50.564 "aliases": [ 00:07:50.564 "1396e6b9-2d9e-4991-88cc-eb7447b0fc86" 00:07:50.564 ], 00:07:50.564 "product_name": "Malloc disk", 00:07:50.564 "block_size": 512, 00:07:50.564 "num_blocks": 65536, 00:07:50.564 "uuid": "1396e6b9-2d9e-4991-88cc-eb7447b0fc86", 00:07:50.564 "assigned_rate_limits": { 00:07:50.564 "rw_ios_per_sec": 0, 00:07:50.564 "rw_mbytes_per_sec": 0, 00:07:50.564 "r_mbytes_per_sec": 0, 00:07:50.564 "w_mbytes_per_sec": 0 00:07:50.564 }, 00:07:50.564 "claimed": true, 00:07:50.564 "claim_type": "exclusive_write", 00:07:50.564 "zoned": false, 00:07:50.564 "supported_io_types": { 00:07:50.564 "read": true, 00:07:50.564 "write": true, 00:07:50.564 "unmap": true, 00:07:50.564 "flush": true, 00:07:50.564 "reset": true, 00:07:50.564 "nvme_admin": false, 00:07:50.564 "nvme_io": false, 00:07:50.564 "nvme_io_md": false, 00:07:50.564 "write_zeroes": true, 00:07:50.564 "zcopy": true, 00:07:50.564 "get_zone_info": false, 00:07:50.564 "zone_management": false, 00:07:50.564 "zone_append": false, 00:07:50.564 "compare": false, 00:07:50.564 "compare_and_write": false, 00:07:50.564 "abort": true, 00:07:50.564 "seek_hole": false, 00:07:50.564 "seek_data": false, 00:07:50.564 "copy": true, 00:07:50.564 "nvme_iov_md": false 00:07:50.564 }, 00:07:50.564 "memory_domains": [ 00:07:50.564 { 00:07:50.564 "dma_device_id": "system", 00:07:50.564 "dma_device_type": 1 00:07:50.564 }, 00:07:50.564 { 00:07:50.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.564 "dma_device_type": 2 00:07:50.564 } 00:07:50.564 ], 00:07:50.564 "driver_specific": {} 00:07:50.564 } 00:07:50.564 ] 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:50.564 17:52:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.564 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.565 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.565 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.565 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.565 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.565 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.565 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.565 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.565 17:52:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.565 "name": "Existed_Raid", 00:07:50.565 "uuid": "2a1053f0-8881-464f-8778-ff5cc7d454fd", 00:07:50.565 "strip_size_kb": 64, 00:07:50.565 "state": "online", 00:07:50.565 "raid_level": "concat", 00:07:50.565 "superblock": true, 00:07:50.565 "num_base_bdevs": 2, 00:07:50.565 "num_base_bdevs_discovered": 2, 00:07:50.565 "num_base_bdevs_operational": 2, 00:07:50.565 "base_bdevs_list": [ 00:07:50.565 { 00:07:50.565 "name": "BaseBdev1", 00:07:50.565 "uuid": "0c8791bb-d01c-4343-a41c-d340652796ff", 00:07:50.565 "is_configured": true, 00:07:50.565 "data_offset": 2048, 00:07:50.565 "data_size": 63488 00:07:50.565 }, 00:07:50.565 { 00:07:50.565 "name": "BaseBdev2", 00:07:50.565 "uuid": "1396e6b9-2d9e-4991-88cc-eb7447b0fc86", 00:07:50.565 "is_configured": true, 00:07:50.565 "data_offset": 2048, 00:07:50.565 "data_size": 63488 00:07:50.565 } 00:07:50.565 ] 00:07:50.565 }' 00:07:50.565 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.565 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.826 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:50.826 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:50.826 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:50.826 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:50.826 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:50.826 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:50.826 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:50.826 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:50.826 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.826 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.826 [2024-11-26 17:52:32.670205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.086 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.086 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:51.086 "name": "Existed_Raid", 00:07:51.086 "aliases": [ 00:07:51.086 "2a1053f0-8881-464f-8778-ff5cc7d454fd" 00:07:51.086 ], 00:07:51.086 "product_name": "Raid Volume", 00:07:51.086 "block_size": 512, 00:07:51.086 "num_blocks": 126976, 00:07:51.086 "uuid": "2a1053f0-8881-464f-8778-ff5cc7d454fd", 00:07:51.086 "assigned_rate_limits": { 00:07:51.086 "rw_ios_per_sec": 0, 00:07:51.086 "rw_mbytes_per_sec": 0, 00:07:51.086 "r_mbytes_per_sec": 0, 00:07:51.086 "w_mbytes_per_sec": 0 00:07:51.086 }, 00:07:51.086 "claimed": false, 00:07:51.086 "zoned": false, 00:07:51.086 "supported_io_types": { 00:07:51.086 "read": true, 00:07:51.086 "write": true, 00:07:51.086 "unmap": true, 00:07:51.086 "flush": true, 00:07:51.086 "reset": true, 00:07:51.086 "nvme_admin": false, 00:07:51.086 "nvme_io": false, 00:07:51.086 "nvme_io_md": false, 00:07:51.086 "write_zeroes": true, 00:07:51.086 "zcopy": false, 00:07:51.086 "get_zone_info": false, 00:07:51.086 "zone_management": false, 00:07:51.086 "zone_append": false, 00:07:51.086 "compare": false, 00:07:51.086 "compare_and_write": false, 00:07:51.086 "abort": false, 00:07:51.086 "seek_hole": false, 00:07:51.086 "seek_data": false, 00:07:51.086 "copy": false, 00:07:51.086 "nvme_iov_md": false 00:07:51.086 }, 00:07:51.086 "memory_domains": [ 00:07:51.086 { 00:07:51.086 
"dma_device_id": "system", 00:07:51.086 "dma_device_type": 1 00:07:51.086 }, 00:07:51.086 { 00:07:51.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.086 "dma_device_type": 2 00:07:51.086 }, 00:07:51.086 { 00:07:51.086 "dma_device_id": "system", 00:07:51.086 "dma_device_type": 1 00:07:51.086 }, 00:07:51.086 { 00:07:51.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.086 "dma_device_type": 2 00:07:51.086 } 00:07:51.086 ], 00:07:51.086 "driver_specific": { 00:07:51.086 "raid": { 00:07:51.086 "uuid": "2a1053f0-8881-464f-8778-ff5cc7d454fd", 00:07:51.086 "strip_size_kb": 64, 00:07:51.086 "state": "online", 00:07:51.086 "raid_level": "concat", 00:07:51.086 "superblock": true, 00:07:51.086 "num_base_bdevs": 2, 00:07:51.086 "num_base_bdevs_discovered": 2, 00:07:51.086 "num_base_bdevs_operational": 2, 00:07:51.086 "base_bdevs_list": [ 00:07:51.086 { 00:07:51.086 "name": "BaseBdev1", 00:07:51.086 "uuid": "0c8791bb-d01c-4343-a41c-d340652796ff", 00:07:51.086 "is_configured": true, 00:07:51.086 "data_offset": 2048, 00:07:51.086 "data_size": 63488 00:07:51.086 }, 00:07:51.086 { 00:07:51.086 "name": "BaseBdev2", 00:07:51.086 "uuid": "1396e6b9-2d9e-4991-88cc-eb7447b0fc86", 00:07:51.086 "is_configured": true, 00:07:51.086 "data_offset": 2048, 00:07:51.087 "data_size": 63488 00:07:51.087 } 00:07:51.087 ] 00:07:51.087 } 00:07:51.087 } 00:07:51.087 }' 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:51.087 BaseBdev2' 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:51.087 17:52:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.087 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.087 [2024-11-26 17:52:32.897503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:51.087 [2024-11-26 17:52:32.897545] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:51.087 [2024-11-26 17:52:32.897601] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.382 17:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.382 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:51.382 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:51.382 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:51.382 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:51.382 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:51.382 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:51.382 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.382 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:51.382 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:51.382 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.382 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:51.382 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.382 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.382 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.382 17:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.382 17:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.382 17:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.382 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.382 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.382 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.382 17:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.382 "name": "Existed_Raid", 00:07:51.382 "uuid": "2a1053f0-8881-464f-8778-ff5cc7d454fd", 00:07:51.382 "strip_size_kb": 64, 00:07:51.382 "state": "offline", 00:07:51.382 "raid_level": "concat", 00:07:51.382 "superblock": true, 00:07:51.382 "num_base_bdevs": 2, 00:07:51.382 "num_base_bdevs_discovered": 1, 00:07:51.382 "num_base_bdevs_operational": 1, 00:07:51.382 "base_bdevs_list": [ 00:07:51.382 { 00:07:51.382 "name": null, 00:07:51.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.382 "is_configured": false, 00:07:51.382 "data_offset": 0, 00:07:51.382 "data_size": 63488 00:07:51.382 }, 00:07:51.382 { 00:07:51.382 "name": "BaseBdev2", 00:07:51.382 "uuid": "1396e6b9-2d9e-4991-88cc-eb7447b0fc86", 00:07:51.382 "is_configured": true, 00:07:51.382 "data_offset": 2048, 00:07:51.382 "data_size": 63488 00:07:51.382 } 00:07:51.382 ] 
00:07:51.382 }' 00:07:51.382 17:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.382 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.641 17:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:51.641 17:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:51.641 17:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:51.641 17:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.641 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.641 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.641 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.641 17:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:51.641 17:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:51.641 17:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:51.641 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.641 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.641 [2024-11-26 17:52:33.448203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:51.641 [2024-11-26 17:52:33.448271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.901 17:52:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62130 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62130 ']' 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62130 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62130 00:07:51.901 killing process with pid 62130 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62130' 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62130 00:07:51.901 [2024-11-26 17:52:33.632287] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.901 17:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62130 00:07:51.901 [2024-11-26 17:52:33.649713] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:53.280 17:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:53.280 00:07:53.280 real 0m5.090s 00:07:53.280 user 0m7.302s 00:07:53.280 sys 0m0.835s 00:07:53.280 17:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.280 17:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.280 ************************************ 00:07:53.280 END TEST raid_state_function_test_sb 00:07:53.280 ************************************ 00:07:53.280 17:52:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:53.280 17:52:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:53.280 17:52:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.280 17:52:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:53.280 ************************************ 00:07:53.280 START TEST raid_superblock_test 00:07:53.280 ************************************ 00:07:53.280 17:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:53.280 17:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:53.280 17:52:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:53.280 17:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:53.280 17:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:53.280 17:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:53.280 17:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:53.280 17:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:53.280 17:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:53.280 17:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:53.280 17:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:53.280 17:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:53.280 17:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:53.280 17:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:53.281 17:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:53.281 17:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:53.281 17:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:53.281 17:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62382 00:07:53.281 17:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:53.281 17:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62382 00:07:53.281 17:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62382 ']' 00:07:53.281 
17:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.281 17:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.281 17:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.281 17:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.281 17:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.281 [2024-11-26 17:52:34.997961] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:07:53.281 [2024-11-26 17:52:34.998088] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62382 ] 00:07:53.540 [2024-11-26 17:52:35.174303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.540 [2024-11-26 17:52:35.304300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.800 [2024-11-26 17:52:35.523366] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.800 [2024-11-26 17:52:35.523415] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.073 17:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.073 17:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:54.073 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:54.073 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:54.073 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.074 malloc1 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.074 [2024-11-26 17:52:35.912308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:54.074 [2024-11-26 17:52:35.912396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.074 [2024-11-26 17:52:35.912424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:54.074 [2024-11-26 17:52:35.912434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:54.074 [2024-11-26 17:52:35.914717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.074 [2024-11-26 17:52:35.914754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:54.074 pt1 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.074 17:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.358 malloc2 00:07:54.358 17:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.358 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:54.358 17:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:54.358 17:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.358 [2024-11-26 17:52:35.969578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:54.358 [2024-11-26 17:52:35.969661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.358 [2024-11-26 17:52:35.969693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:54.358 [2024-11-26 17:52:35.969703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.358 [2024-11-26 17:52:35.972050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.358 [2024-11-26 17:52:35.972087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:54.358 pt2 00:07:54.358 17:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.358 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:54.358 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.358 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:54.358 17:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.358 17:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.358 [2024-11-26 17:52:35.981649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:54.358 [2024-11-26 17:52:35.983560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:54.358 [2024-11-26 17:52:35.983741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:54.358 [2024-11-26 17:52:35.983756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:54.358 [2024-11-26 17:52:35.984084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:54.358 [2024-11-26 17:52:35.984267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:54.358 [2024-11-26 17:52:35.984286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:54.358 [2024-11-26 17:52:35.984494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.358 17:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.359 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:54.359 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.359 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.359 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:54.359 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.359 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.359 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.359 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.359 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.359 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.359 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.359 17:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.359 17:52:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.359 17:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.359 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.359 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.359 "name": "raid_bdev1", 00:07:54.359 "uuid": "464f1b10-301e-463c-a406-253b0a716004", 00:07:54.359 "strip_size_kb": 64, 00:07:54.359 "state": "online", 00:07:54.359 "raid_level": "concat", 00:07:54.359 "superblock": true, 00:07:54.359 "num_base_bdevs": 2, 00:07:54.359 "num_base_bdevs_discovered": 2, 00:07:54.359 "num_base_bdevs_operational": 2, 00:07:54.359 "base_bdevs_list": [ 00:07:54.359 { 00:07:54.359 "name": "pt1", 00:07:54.359 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:54.359 "is_configured": true, 00:07:54.359 "data_offset": 2048, 00:07:54.359 "data_size": 63488 00:07:54.359 }, 00:07:54.359 { 00:07:54.359 "name": "pt2", 00:07:54.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.359 "is_configured": true, 00:07:54.359 "data_offset": 2048, 00:07:54.359 "data_size": 63488 00:07:54.359 } 00:07:54.359 ] 00:07:54.359 }' 00:07:54.359 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.359 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.617 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:54.617 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:54.617 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:54.617 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:54.617 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:54.617 
17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:54.617 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:54.617 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:54.617 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.617 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.617 [2024-11-26 17:52:36.433185] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.617 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.617 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:54.617 "name": "raid_bdev1", 00:07:54.617 "aliases": [ 00:07:54.617 "464f1b10-301e-463c-a406-253b0a716004" 00:07:54.617 ], 00:07:54.617 "product_name": "Raid Volume", 00:07:54.617 "block_size": 512, 00:07:54.617 "num_blocks": 126976, 00:07:54.617 "uuid": "464f1b10-301e-463c-a406-253b0a716004", 00:07:54.617 "assigned_rate_limits": { 00:07:54.617 "rw_ios_per_sec": 0, 00:07:54.617 "rw_mbytes_per_sec": 0, 00:07:54.617 "r_mbytes_per_sec": 0, 00:07:54.617 "w_mbytes_per_sec": 0 00:07:54.617 }, 00:07:54.617 "claimed": false, 00:07:54.617 "zoned": false, 00:07:54.617 "supported_io_types": { 00:07:54.617 "read": true, 00:07:54.617 "write": true, 00:07:54.617 "unmap": true, 00:07:54.617 "flush": true, 00:07:54.617 "reset": true, 00:07:54.617 "nvme_admin": false, 00:07:54.617 "nvme_io": false, 00:07:54.617 "nvme_io_md": false, 00:07:54.617 "write_zeroes": true, 00:07:54.617 "zcopy": false, 00:07:54.617 "get_zone_info": false, 00:07:54.617 "zone_management": false, 00:07:54.617 "zone_append": false, 00:07:54.617 "compare": false, 00:07:54.617 "compare_and_write": false, 00:07:54.617 "abort": false, 00:07:54.617 "seek_hole": false, 00:07:54.617 
"seek_data": false, 00:07:54.617 "copy": false, 00:07:54.617 "nvme_iov_md": false 00:07:54.617 }, 00:07:54.617 "memory_domains": [ 00:07:54.617 { 00:07:54.617 "dma_device_id": "system", 00:07:54.617 "dma_device_type": 1 00:07:54.617 }, 00:07:54.617 { 00:07:54.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.617 "dma_device_type": 2 00:07:54.617 }, 00:07:54.617 { 00:07:54.617 "dma_device_id": "system", 00:07:54.617 "dma_device_type": 1 00:07:54.618 }, 00:07:54.618 { 00:07:54.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.618 "dma_device_type": 2 00:07:54.618 } 00:07:54.618 ], 00:07:54.618 "driver_specific": { 00:07:54.618 "raid": { 00:07:54.618 "uuid": "464f1b10-301e-463c-a406-253b0a716004", 00:07:54.618 "strip_size_kb": 64, 00:07:54.618 "state": "online", 00:07:54.618 "raid_level": "concat", 00:07:54.618 "superblock": true, 00:07:54.618 "num_base_bdevs": 2, 00:07:54.618 "num_base_bdevs_discovered": 2, 00:07:54.618 "num_base_bdevs_operational": 2, 00:07:54.618 "base_bdevs_list": [ 00:07:54.618 { 00:07:54.618 "name": "pt1", 00:07:54.618 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:54.618 "is_configured": true, 00:07:54.618 "data_offset": 2048, 00:07:54.618 "data_size": 63488 00:07:54.618 }, 00:07:54.618 { 00:07:54.618 "name": "pt2", 00:07:54.618 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.618 "is_configured": true, 00:07:54.618 "data_offset": 2048, 00:07:54.618 "data_size": 63488 00:07:54.618 } 00:07:54.618 ] 00:07:54.618 } 00:07:54.618 } 00:07:54.618 }' 00:07:54.618 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:54.877 pt2' 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.877 17:52:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 [2024-11-26 17:52:36.656779] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=464f1b10-301e-463c-a406-253b0a716004 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 464f1b10-301e-463c-a406-253b0a716004 ']' 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 [2024-11-26 17:52:36.692348] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.877 [2024-11-26 17:52:36.692384] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.877 [2024-11-26 17:52:36.692488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.877 [2024-11-26 17:52:36.692552] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.877 [2024-11-26 17:52:36.692567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq 
-r '.[]' 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.877 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.878 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.878 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:54.878 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:54.878 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:54.878 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:54.878 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.878 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.137 [2024-11-26 17:52:36.808249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:55.137 [2024-11-26 17:52:36.810420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:55.137 [2024-11-26 17:52:36.810500] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:55.137 [2024-11-26 17:52:36.810560] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:55.137 [2024-11-26 17:52:36.810577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.137 [2024-11-26 17:52:36.810588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:55.137 request: 00:07:55.137 { 00:07:55.137 "name": "raid_bdev1", 00:07:55.137 "raid_level": "concat", 00:07:55.137 "base_bdevs": [ 00:07:55.137 "malloc1", 00:07:55.137 "malloc2" 00:07:55.137 ], 00:07:55.137 "strip_size_kb": 64, 00:07:55.137 "superblock": false, 00:07:55.137 "method": "bdev_raid_create", 00:07:55.137 "req_id": 1 00:07:55.137 } 00:07:55.137 Got JSON-RPC error response 00:07:55.137 response: 00:07:55.137 { 00:07:55.137 "code": -17, 00:07:55.137 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:55.137 } 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.137 
17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.137 [2024-11-26 17:52:36.856129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:55.137 [2024-11-26 17:52:36.856188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.137 [2024-11-26 17:52:36.856205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:55.137 [2024-11-26 17:52:36.856216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.137 [2024-11-26 17:52:36.858645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.137 [2024-11-26 17:52:36.858681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:55.137 [2024-11-26 17:52:36.858768] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:55.137 [2024-11-26 17:52:36.858830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:55.137 pt1 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.137 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.137 "name": "raid_bdev1", 00:07:55.137 "uuid": "464f1b10-301e-463c-a406-253b0a716004", 00:07:55.137 "strip_size_kb": 64, 00:07:55.137 "state": "configuring", 00:07:55.137 "raid_level": "concat", 00:07:55.137 "superblock": true, 00:07:55.137 "num_base_bdevs": 2, 00:07:55.138 "num_base_bdevs_discovered": 1, 00:07:55.138 "num_base_bdevs_operational": 2, 00:07:55.138 "base_bdevs_list": [ 00:07:55.138 { 00:07:55.138 "name": "pt1", 00:07:55.138 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:55.138 "is_configured": true, 00:07:55.138 "data_offset": 2048, 00:07:55.138 "data_size": 63488 00:07:55.138 }, 00:07:55.138 { 00:07:55.138 "name": null, 00:07:55.138 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.138 "is_configured": false, 00:07:55.138 "data_offset": 2048, 00:07:55.138 "data_size": 63488 00:07:55.138 } 00:07:55.138 ] 00:07:55.138 }' 00:07:55.138 17:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.138 17:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.706 [2024-11-26 17:52:37.295399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:55.706 [2024-11-26 17:52:37.295494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.706 [2024-11-26 17:52:37.295520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:55.706 [2024-11-26 17:52:37.295532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.706 [2024-11-26 17:52:37.296053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.706 [2024-11-26 17:52:37.296076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:55.706 [2024-11-26 17:52:37.296175] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:55.706 [2024-11-26 17:52:37.296204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:55.706 [2024-11-26 17:52:37.296335] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:55.706 [2024-11-26 17:52:37.296348] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:55.706 [2024-11-26 17:52:37.296606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:55.706 [2024-11-26 17:52:37.296749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:55.706 [2024-11-26 17:52:37.296767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:55.706 [2024-11-26 17:52:37.296930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.706 pt2 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.706 "name": "raid_bdev1", 00:07:55.706 "uuid": "464f1b10-301e-463c-a406-253b0a716004", 00:07:55.706 "strip_size_kb": 64, 00:07:55.706 "state": "online", 00:07:55.706 "raid_level": "concat", 00:07:55.706 "superblock": true, 00:07:55.706 "num_base_bdevs": 2, 00:07:55.706 "num_base_bdevs_discovered": 2, 00:07:55.706 "num_base_bdevs_operational": 2, 00:07:55.706 "base_bdevs_list": [ 00:07:55.706 { 00:07:55.706 "name": "pt1", 00:07:55.706 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.706 "is_configured": true, 00:07:55.706 "data_offset": 2048, 00:07:55.706 "data_size": 63488 00:07:55.706 }, 00:07:55.706 { 00:07:55.706 "name": "pt2", 00:07:55.706 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.706 "is_configured": true, 00:07:55.706 "data_offset": 2048, 00:07:55.706 "data_size": 63488 00:07:55.706 } 00:07:55.706 ] 00:07:55.706 }' 00:07:55.706 17:52:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.706 17:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.965 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:55.965 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:55.965 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:55.965 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:55.965 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:55.965 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:55.965 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:55.965 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:55.965 17:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.965 17:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.965 [2024-11-26 17:52:37.742983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.965 17:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.965 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:55.965 "name": "raid_bdev1", 00:07:55.965 "aliases": [ 00:07:55.965 "464f1b10-301e-463c-a406-253b0a716004" 00:07:55.965 ], 00:07:55.965 "product_name": "Raid Volume", 00:07:55.965 "block_size": 512, 00:07:55.965 "num_blocks": 126976, 00:07:55.965 "uuid": "464f1b10-301e-463c-a406-253b0a716004", 00:07:55.965 "assigned_rate_limits": { 00:07:55.965 "rw_ios_per_sec": 0, 00:07:55.965 "rw_mbytes_per_sec": 0, 00:07:55.965 
"r_mbytes_per_sec": 0, 00:07:55.965 "w_mbytes_per_sec": 0 00:07:55.965 }, 00:07:55.965 "claimed": false, 00:07:55.965 "zoned": false, 00:07:55.965 "supported_io_types": { 00:07:55.965 "read": true, 00:07:55.965 "write": true, 00:07:55.965 "unmap": true, 00:07:55.965 "flush": true, 00:07:55.965 "reset": true, 00:07:55.965 "nvme_admin": false, 00:07:55.965 "nvme_io": false, 00:07:55.965 "nvme_io_md": false, 00:07:55.965 "write_zeroes": true, 00:07:55.965 "zcopy": false, 00:07:55.965 "get_zone_info": false, 00:07:55.965 "zone_management": false, 00:07:55.965 "zone_append": false, 00:07:55.965 "compare": false, 00:07:55.965 "compare_and_write": false, 00:07:55.965 "abort": false, 00:07:55.965 "seek_hole": false, 00:07:55.965 "seek_data": false, 00:07:55.965 "copy": false, 00:07:55.965 "nvme_iov_md": false 00:07:55.965 }, 00:07:55.965 "memory_domains": [ 00:07:55.965 { 00:07:55.965 "dma_device_id": "system", 00:07:55.965 "dma_device_type": 1 00:07:55.965 }, 00:07:55.965 { 00:07:55.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.965 "dma_device_type": 2 00:07:55.965 }, 00:07:55.965 { 00:07:55.965 "dma_device_id": "system", 00:07:55.965 "dma_device_type": 1 00:07:55.965 }, 00:07:55.965 { 00:07:55.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.965 "dma_device_type": 2 00:07:55.965 } 00:07:55.965 ], 00:07:55.965 "driver_specific": { 00:07:55.965 "raid": { 00:07:55.965 "uuid": "464f1b10-301e-463c-a406-253b0a716004", 00:07:55.965 "strip_size_kb": 64, 00:07:55.965 "state": "online", 00:07:55.965 "raid_level": "concat", 00:07:55.965 "superblock": true, 00:07:55.965 "num_base_bdevs": 2, 00:07:55.965 "num_base_bdevs_discovered": 2, 00:07:55.965 "num_base_bdevs_operational": 2, 00:07:55.965 "base_bdevs_list": [ 00:07:55.965 { 00:07:55.965 "name": "pt1", 00:07:55.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.965 "is_configured": true, 00:07:55.965 "data_offset": 2048, 00:07:55.965 "data_size": 63488 00:07:55.965 }, 00:07:55.966 { 00:07:55.966 "name": 
"pt2", 00:07:55.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.966 "is_configured": true, 00:07:55.966 "data_offset": 2048, 00:07:55.966 "data_size": 63488 00:07:55.966 } 00:07:55.966 ] 00:07:55.966 } 00:07:55.966 } 00:07:55.966 }' 00:07:55.966 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:55.966 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:55.966 pt2' 00:07:55.966 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.226 [2024-11-26 17:52:37.974564] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.226 17:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.226 17:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 464f1b10-301e-463c-a406-253b0a716004 '!=' 464f1b10-301e-463c-a406-253b0a716004 ']' 00:07:56.226 17:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:56.226 17:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:56.226 17:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:56.226 17:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62382 00:07:56.226 17:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62382 ']' 00:07:56.226 17:52:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 62382 00:07:56.226 17:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:56.226 17:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.226 17:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62382 00:07:56.226 17:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.226 17:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.226 killing process with pid 62382 00:07:56.226 17:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62382' 00:07:56.226 17:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62382 00:07:56.226 [2024-11-26 17:52:38.057229] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.226 [2024-11-26 17:52:38.057340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.226 [2024-11-26 17:52:38.057396] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.226 17:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62382 00:07:56.226 [2024-11-26 17:52:38.057409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:56.486 [2024-11-26 17:52:38.273489] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.866 17:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:57.866 00:07:57.866 real 0m4.523s 00:07:57.866 user 0m6.347s 00:07:57.866 sys 0m0.725s 00:07:57.866 17:52:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.866 17:52:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:57.866 ************************************ 00:07:57.866 END TEST raid_superblock_test 00:07:57.866 ************************************ 00:07:57.866 17:52:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:57.866 17:52:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:57.866 17:52:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.866 17:52:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.866 ************************************ 00:07:57.866 START TEST raid_read_error_test 00:07:57.866 ************************************ 00:07:57.866 17:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:57.866 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:57.866 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:57.866 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:57.866 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:57.866 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.866 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jiZBRWCwFK 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62594 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62594 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62594 ']' 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.867 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.867 17:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.867 [2024-11-26 17:52:39.606699] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:07:57.867 [2024-11-26 17:52:39.606814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62594 ] 00:07:58.135 [2024-11-26 17:52:39.781815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.135 [2024-11-26 17:52:39.910671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.394 [2024-11-26 17:52:40.119446] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.394 [2024-11-26 17:52:40.119516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.653 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.653 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:58.653 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.653 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:58.653 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.653 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.913 BaseBdev1_malloc 
00:07:58.913 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.913 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:58.913 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.913 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.913 true 00:07:58.913 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.913 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:58.913 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.913 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.913 [2024-11-26 17:52:40.535490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:58.914 [2024-11-26 17:52:40.535546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.914 [2024-11-26 17:52:40.535567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:58.914 [2024-11-26 17:52:40.535578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.914 [2024-11-26 17:52:40.537807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.914 [2024-11-26 17:52:40.537848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:58.914 BaseBdev1 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.914 BaseBdev2_malloc 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.914 true 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.914 [2024-11-26 17:52:40.603861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:58.914 [2024-11-26 17:52:40.603918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.914 [2024-11-26 17:52:40.603938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:58.914 [2024-11-26 17:52:40.603949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.914 [2024-11-26 17:52:40.606231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.914 [2024-11-26 17:52:40.606265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:58.914 BaseBdev2 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.914 [2024-11-26 17:52:40.615897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.914 [2024-11-26 17:52:40.617785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.914 [2024-11-26 17:52:40.617996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:58.914 [2024-11-26 17:52:40.618013] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:58.914 [2024-11-26 17:52:40.618275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:58.914 [2024-11-26 17:52:40.618458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:58.914 [2024-11-26 17:52:40.618478] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:58.914 [2024-11-26 17:52:40.618642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.914 "name": "raid_bdev1", 00:07:58.914 "uuid": "832eb93e-5b54-4794-ace6-87a375b9a4dd", 00:07:58.914 "strip_size_kb": 64, 00:07:58.914 "state": "online", 00:07:58.914 "raid_level": "concat", 00:07:58.914 "superblock": true, 00:07:58.914 "num_base_bdevs": 2, 00:07:58.914 "num_base_bdevs_discovered": 2, 00:07:58.914 "num_base_bdevs_operational": 2, 00:07:58.914 "base_bdevs_list": [ 00:07:58.914 { 00:07:58.914 "name": "BaseBdev1", 00:07:58.914 "uuid": "00897c68-1a5b-5a2d-820d-8204a1fd3aa4", 00:07:58.914 "is_configured": true, 00:07:58.914 "data_offset": 2048, 00:07:58.914 "data_size": 63488 00:07:58.914 }, 00:07:58.914 { 00:07:58.914 "name": "BaseBdev2", 00:07:58.914 
"uuid": "f6dc5769-f7ec-5f1a-8ca7-db2d92a24db4", 00:07:58.914 "is_configured": true, 00:07:58.914 "data_offset": 2048, 00:07:58.914 "data_size": 63488 00:07:58.914 } 00:07:58.914 ] 00:07:58.914 }' 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.914 17:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.483 17:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:59.483 17:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:59.483 [2024-11-26 17:52:41.176247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.421 "name": "raid_bdev1", 00:08:00.421 "uuid": "832eb93e-5b54-4794-ace6-87a375b9a4dd", 00:08:00.421 "strip_size_kb": 64, 00:08:00.421 "state": "online", 00:08:00.421 "raid_level": "concat", 00:08:00.421 "superblock": true, 00:08:00.421 "num_base_bdevs": 2, 00:08:00.421 "num_base_bdevs_discovered": 2, 00:08:00.421 "num_base_bdevs_operational": 2, 00:08:00.421 "base_bdevs_list": [ 00:08:00.421 { 00:08:00.421 "name": "BaseBdev1", 00:08:00.421 "uuid": "00897c68-1a5b-5a2d-820d-8204a1fd3aa4", 00:08:00.421 "is_configured": true, 00:08:00.421 "data_offset": 2048, 00:08:00.421 "data_size": 63488 00:08:00.421 }, 00:08:00.421 { 00:08:00.421 "name": "BaseBdev2", 00:08:00.421 "uuid": 
"f6dc5769-f7ec-5f1a-8ca7-db2d92a24db4", 00:08:00.421 "is_configured": true, 00:08:00.421 "data_offset": 2048, 00:08:00.421 "data_size": 63488 00:08:00.421 } 00:08:00.421 ] 00:08:00.421 }' 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.421 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.681 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:00.681 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.681 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.940 [2024-11-26 17:52:42.548229] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:00.940 [2024-11-26 17:52:42.548266] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:00.940 [2024-11-26 17:52:42.551213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.940 [2024-11-26 17:52:42.551260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.940 [2024-11-26 17:52:42.551292] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.940 [2024-11-26 17:52:42.551304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:00.940 { 00:08:00.940 "results": [ 00:08:00.940 { 00:08:00.940 "job": "raid_bdev1", 00:08:00.940 "core_mask": "0x1", 00:08:00.940 "workload": "randrw", 00:08:00.940 "percentage": 50, 00:08:00.940 "status": "finished", 00:08:00.940 "queue_depth": 1, 00:08:00.940 "io_size": 131072, 00:08:00.940 "runtime": 1.372815, 00:08:00.940 "iops": 14700.451262551764, 00:08:00.940 "mibps": 1837.5564078189705, 00:08:00.940 "io_failed": 1, 00:08:00.940 "io_timeout": 0, 00:08:00.940 "avg_latency_us": 
93.89225644884823, 00:08:00.940 "min_latency_us": 26.270742358078603, 00:08:00.940 "max_latency_us": 1495.3082969432314 00:08:00.940 } 00:08:00.940 ], 00:08:00.940 "core_count": 1 00:08:00.940 } 00:08:00.940 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.940 17:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62594 00:08:00.940 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62594 ']' 00:08:00.940 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62594 00:08:00.940 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:00.940 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.941 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62594 00:08:00.941 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.941 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.941 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62594' 00:08:00.941 killing process with pid 62594 00:08:00.941 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62594 00:08:00.941 [2024-11-26 17:52:42.599251] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.941 17:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62594 00:08:00.941 [2024-11-26 17:52:42.741184] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:02.320 17:52:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:02.320 17:52:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jiZBRWCwFK 00:08:02.320 
17:52:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:02.320 17:52:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:02.320 17:52:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:02.320 17:52:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:02.320 17:52:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:02.320 17:52:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:02.320 00:08:02.320 real 0m4.497s 00:08:02.320 user 0m5.413s 00:08:02.320 sys 0m0.537s 00:08:02.320 17:52:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.320 17:52:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.320 ************************************ 00:08:02.320 END TEST raid_read_error_test 00:08:02.320 ************************************ 00:08:02.320 17:52:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:02.320 17:52:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:02.320 17:52:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.320 17:52:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:02.320 ************************************ 00:08:02.320 START TEST raid_write_error_test 00:08:02.320 ************************************ 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:02.320 17:52:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.d1IHDA65aH 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62734 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62734 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62734 ']' 00:08:02.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.320 17:52:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.320 [2024-11-26 17:52:44.180463] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:08:02.321 [2024-11-26 17:52:44.180614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62734 ] 00:08:02.582 [2024-11-26 17:52:44.350296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.842 [2024-11-26 17:52:44.476612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.842 [2024-11-26 17:52:44.690413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.842 [2024-11-26 17:52:44.690475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.412 BaseBdev1_malloc 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.412 true 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.412 [2024-11-26 17:52:45.125400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:03.412 [2024-11-26 17:52:45.125539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.412 [2024-11-26 17:52:45.125600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:03.412 [2024-11-26 17:52:45.125667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.412 [2024-11-26 17:52:45.127959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.412 [2024-11-26 17:52:45.128061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:03.412 BaseBdev1 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.412 BaseBdev2_malloc 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:03.412 17:52:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.412 true 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.412 [2024-11-26 17:52:45.194161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:03.412 [2024-11-26 17:52:45.194286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.412 [2024-11-26 17:52:45.194327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:03.412 [2024-11-26 17:52:45.194373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.412 [2024-11-26 17:52:45.196740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.412 [2024-11-26 17:52:45.196839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:03.412 BaseBdev2 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.412 [2024-11-26 17:52:45.206201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:03.412 [2024-11-26 17:52:45.208071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:03.412 [2024-11-26 17:52:45.208260] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:03.412 [2024-11-26 17:52:45.208277] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:03.412 [2024-11-26 17:52:45.208572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:03.412 [2024-11-26 17:52:45.208758] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:03.412 [2024-11-26 17:52:45.208771] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:03.412 [2024-11-26 17:52:45.208978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.412 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.413 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.413 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.413 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.413 17:52:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.413 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.413 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.413 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.413 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.413 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.413 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.413 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.413 "name": "raid_bdev1", 00:08:03.413 "uuid": "60a43a71-3f50-4aca-a4d1-4573dd78e65e", 00:08:03.413 "strip_size_kb": 64, 00:08:03.413 "state": "online", 00:08:03.413 "raid_level": "concat", 00:08:03.413 "superblock": true, 00:08:03.413 "num_base_bdevs": 2, 00:08:03.413 "num_base_bdevs_discovered": 2, 00:08:03.413 "num_base_bdevs_operational": 2, 00:08:03.413 "base_bdevs_list": [ 00:08:03.413 { 00:08:03.413 "name": "BaseBdev1", 00:08:03.413 "uuid": "b6086cb3-9627-55a5-9cc4-f7d8f04707ef", 00:08:03.413 "is_configured": true, 00:08:03.413 "data_offset": 2048, 00:08:03.413 "data_size": 63488 00:08:03.413 }, 00:08:03.413 { 00:08:03.413 "name": "BaseBdev2", 00:08:03.413 "uuid": "6c40085e-8d8d-5127-9bd9-b480301b45ca", 00:08:03.413 "is_configured": true, 00:08:03.413 "data_offset": 2048, 00:08:03.413 "data_size": 63488 00:08:03.413 } 00:08:03.413 ] 00:08:03.413 }' 00:08:03.413 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.413 17:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.982 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:03.982 17:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:03.982 [2024-11-26 17:52:45.730769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.921 "name": "raid_bdev1", 00:08:04.921 "uuid": "60a43a71-3f50-4aca-a4d1-4573dd78e65e", 00:08:04.921 "strip_size_kb": 64, 00:08:04.921 "state": "online", 00:08:04.921 "raid_level": "concat", 00:08:04.921 "superblock": true, 00:08:04.921 "num_base_bdevs": 2, 00:08:04.921 "num_base_bdevs_discovered": 2, 00:08:04.921 "num_base_bdevs_operational": 2, 00:08:04.921 "base_bdevs_list": [ 00:08:04.921 { 00:08:04.921 "name": "BaseBdev1", 00:08:04.921 "uuid": "b6086cb3-9627-55a5-9cc4-f7d8f04707ef", 00:08:04.921 "is_configured": true, 00:08:04.921 "data_offset": 2048, 00:08:04.921 "data_size": 63488 00:08:04.921 }, 00:08:04.921 { 00:08:04.921 "name": "BaseBdev2", 00:08:04.921 "uuid": "6c40085e-8d8d-5127-9bd9-b480301b45ca", 00:08:04.921 "is_configured": true, 00:08:04.921 "data_offset": 2048, 00:08:04.921 "data_size": 63488 00:08:04.921 } 00:08:04.921 ] 00:08:04.921 }' 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.921 17:52:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.490 17:52:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:05.490 17:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.490 17:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.490 [2024-11-26 17:52:47.127642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:05.490 [2024-11-26 17:52:47.127754] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.490 [2024-11-26 17:52:47.130810] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.490 [2024-11-26 17:52:47.130899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.490 [2024-11-26 17:52:47.130980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.490 [2024-11-26 17:52:47.131047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:05.490 { 00:08:05.490 "results": [ 00:08:05.490 { 00:08:05.490 "job": "raid_bdev1", 00:08:05.490 "core_mask": "0x1", 00:08:05.490 "workload": "randrw", 00:08:05.490 "percentage": 50, 00:08:05.490 "status": "finished", 00:08:05.490 "queue_depth": 1, 00:08:05.490 "io_size": 131072, 00:08:05.490 "runtime": 1.397723, 00:08:05.490 "iops": 14454.938496397355, 00:08:05.490 "mibps": 1806.8673120496694, 00:08:05.490 "io_failed": 1, 00:08:05.490 "io_timeout": 0, 00:08:05.490 "avg_latency_us": 95.52651470894943, 00:08:05.490 "min_latency_us": 28.28296943231441, 00:08:05.490 "max_latency_us": 1624.0908296943232 00:08:05.490 } 00:08:05.490 ], 00:08:05.490 "core_count": 1 00:08:05.490 } 00:08:05.490 17:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.490 17:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62734 00:08:05.490 17:52:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62734 ']' 00:08:05.490 17:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62734 00:08:05.490 17:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:05.490 17:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.490 17:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62734 00:08:05.490 killing process with pid 62734 00:08:05.490 17:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.490 17:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.490 17:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62734' 00:08:05.490 17:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62734 00:08:05.490 [2024-11-26 17:52:47.169452] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.490 17:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62734 00:08:05.490 [2024-11-26 17:52:47.316104] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.869 17:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.d1IHDA65aH 00:08:06.869 17:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:06.869 17:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:06.869 17:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:06.869 17:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:06.869 17:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:06.869 17:52:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:06.869 ************************************ 00:08:06.869 END TEST raid_write_error_test 00:08:06.869 ************************************ 00:08:06.869 17:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:06.869 00:08:06.869 real 0m4.554s 00:08:06.869 user 0m5.479s 00:08:06.869 sys 0m0.536s 00:08:06.869 17:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.869 17:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.869 17:52:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:06.869 17:52:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:06.869 17:52:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:06.869 17:52:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.869 17:52:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.869 ************************************ 00:08:06.869 START TEST raid_state_function_test 00:08:06.869 ************************************ 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62883 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62883' 00:08:06.869 Process raid pid: 62883 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62883 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62883 ']' 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.869 17:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.128 [2024-11-26 17:52:48.801349] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:08:07.128 [2024-11-26 17:52:48.801586] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.128 [2024-11-26 17:52:48.961497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.389 [2024-11-26 17:52:49.092011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.648 [2024-11-26 17:52:49.320186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.649 [2024-11-26 17:52:49.320327] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.908 [2024-11-26 17:52:49.689088] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:07.908 [2024-11-26 17:52:49.689148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:07.908 [2024-11-26 17:52:49.689160] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:07.908 [2024-11-26 17:52:49.689171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.908 17:52:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.908 "name": "Existed_Raid", 00:08:07.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.908 "strip_size_kb": 0, 00:08:07.908 "state": "configuring", 00:08:07.908 
"raid_level": "raid1", 00:08:07.908 "superblock": false, 00:08:07.908 "num_base_bdevs": 2, 00:08:07.908 "num_base_bdevs_discovered": 0, 00:08:07.908 "num_base_bdevs_operational": 2, 00:08:07.908 "base_bdevs_list": [ 00:08:07.908 { 00:08:07.908 "name": "BaseBdev1", 00:08:07.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.908 "is_configured": false, 00:08:07.908 "data_offset": 0, 00:08:07.908 "data_size": 0 00:08:07.908 }, 00:08:07.908 { 00:08:07.908 "name": "BaseBdev2", 00:08:07.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.908 "is_configured": false, 00:08:07.908 "data_offset": 0, 00:08:07.908 "data_size": 0 00:08:07.908 } 00:08:07.908 ] 00:08:07.908 }' 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.908 17:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.530 [2024-11-26 17:52:50.160282] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.530 [2024-11-26 17:52:50.160393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:08.530 [2024-11-26 17:52:50.168236] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.530 [2024-11-26 17:52:50.168333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.530 [2024-11-26 17:52:50.168385] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.530 [2024-11-26 17:52:50.168423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.530 [2024-11-26 17:52:50.217498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.530 BaseBdev1 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.530 [ 00:08:08.530 { 00:08:08.530 "name": "BaseBdev1", 00:08:08.530 "aliases": [ 00:08:08.530 "566aa2e1-1e93-40bd-86a4-a0a1bc7fba22" 00:08:08.530 ], 00:08:08.530 "product_name": "Malloc disk", 00:08:08.530 "block_size": 512, 00:08:08.530 "num_blocks": 65536, 00:08:08.530 "uuid": "566aa2e1-1e93-40bd-86a4-a0a1bc7fba22", 00:08:08.530 "assigned_rate_limits": { 00:08:08.530 "rw_ios_per_sec": 0, 00:08:08.530 "rw_mbytes_per_sec": 0, 00:08:08.530 "r_mbytes_per_sec": 0, 00:08:08.530 "w_mbytes_per_sec": 0 00:08:08.530 }, 00:08:08.530 "claimed": true, 00:08:08.530 "claim_type": "exclusive_write", 00:08:08.530 "zoned": false, 00:08:08.530 "supported_io_types": { 00:08:08.530 "read": true, 00:08:08.530 "write": true, 00:08:08.530 "unmap": true, 00:08:08.530 "flush": true, 00:08:08.530 "reset": true, 00:08:08.530 "nvme_admin": false, 00:08:08.530 "nvme_io": false, 00:08:08.530 "nvme_io_md": false, 00:08:08.530 "write_zeroes": true, 00:08:08.530 "zcopy": true, 00:08:08.530 "get_zone_info": false, 00:08:08.530 "zone_management": false, 00:08:08.530 "zone_append": false, 00:08:08.530 "compare": false, 00:08:08.530 "compare_and_write": false, 00:08:08.530 "abort": true, 00:08:08.530 "seek_hole": false, 00:08:08.530 "seek_data": false, 00:08:08.530 "copy": true, 00:08:08.530 "nvme_iov_md": 
false 00:08:08.530 }, 00:08:08.530 "memory_domains": [ 00:08:08.530 { 00:08:08.530 "dma_device_id": "system", 00:08:08.530 "dma_device_type": 1 00:08:08.530 }, 00:08:08.530 { 00:08:08.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.530 "dma_device_type": 2 00:08:08.530 } 00:08:08.530 ], 00:08:08.530 "driver_specific": {} 00:08:08.530 } 00:08:08.530 ] 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.530 
17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.530 "name": "Existed_Raid", 00:08:08.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.530 "strip_size_kb": 0, 00:08:08.530 "state": "configuring", 00:08:08.530 "raid_level": "raid1", 00:08:08.530 "superblock": false, 00:08:08.530 "num_base_bdevs": 2, 00:08:08.530 "num_base_bdevs_discovered": 1, 00:08:08.530 "num_base_bdevs_operational": 2, 00:08:08.530 "base_bdevs_list": [ 00:08:08.530 { 00:08:08.530 "name": "BaseBdev1", 00:08:08.530 "uuid": "566aa2e1-1e93-40bd-86a4-a0a1bc7fba22", 00:08:08.530 "is_configured": true, 00:08:08.530 "data_offset": 0, 00:08:08.530 "data_size": 65536 00:08:08.530 }, 00:08:08.530 { 00:08:08.530 "name": "BaseBdev2", 00:08:08.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.530 "is_configured": false, 00:08:08.530 "data_offset": 0, 00:08:08.530 "data_size": 0 00:08:08.530 } 00:08:08.530 ] 00:08:08.530 }' 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.530 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.096 [2024-11-26 17:52:50.704798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:09.096 [2024-11-26 17:52:50.704950] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.096 [2024-11-26 17:52:50.716828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.096 [2024-11-26 17:52:50.718968] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.096 [2024-11-26 17:52:50.719079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.096 "name": "Existed_Raid", 00:08:09.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.096 "strip_size_kb": 0, 00:08:09.096 "state": "configuring", 00:08:09.096 "raid_level": "raid1", 00:08:09.096 "superblock": false, 00:08:09.096 "num_base_bdevs": 2, 00:08:09.096 "num_base_bdevs_discovered": 1, 00:08:09.096 "num_base_bdevs_operational": 2, 00:08:09.096 "base_bdevs_list": [ 00:08:09.096 { 00:08:09.096 "name": "BaseBdev1", 00:08:09.096 "uuid": "566aa2e1-1e93-40bd-86a4-a0a1bc7fba22", 00:08:09.096 "is_configured": true, 00:08:09.096 "data_offset": 0, 00:08:09.096 "data_size": 65536 00:08:09.096 }, 00:08:09.096 { 00:08:09.096 "name": "BaseBdev2", 00:08:09.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.096 "is_configured": false, 00:08:09.096 "data_offset": 0, 00:08:09.096 "data_size": 0 00:08:09.096 } 00:08:09.096 ] 
00:08:09.096 }' 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.096 17:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.355 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:09.355 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.355 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.615 [2024-11-26 17:52:51.231032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.615 [2024-11-26 17:52:51.231202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:09.615 [2024-11-26 17:52:51.231217] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:09.615 [2024-11-26 17:52:51.231548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:09.615 [2024-11-26 17:52:51.231764] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:09.615 [2024-11-26 17:52:51.231779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:09.615 [2024-11-26 17:52:51.232175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.615 BaseBdev2 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.615 [ 00:08:09.615 { 00:08:09.615 "name": "BaseBdev2", 00:08:09.615 "aliases": [ 00:08:09.615 "8a12f3a6-036c-4611-ba40-613dc97cc628" 00:08:09.615 ], 00:08:09.615 "product_name": "Malloc disk", 00:08:09.615 "block_size": 512, 00:08:09.615 "num_blocks": 65536, 00:08:09.615 "uuid": "8a12f3a6-036c-4611-ba40-613dc97cc628", 00:08:09.615 "assigned_rate_limits": { 00:08:09.615 "rw_ios_per_sec": 0, 00:08:09.615 "rw_mbytes_per_sec": 0, 00:08:09.615 "r_mbytes_per_sec": 0, 00:08:09.615 "w_mbytes_per_sec": 0 00:08:09.615 }, 00:08:09.615 "claimed": true, 00:08:09.615 "claim_type": "exclusive_write", 00:08:09.615 "zoned": false, 00:08:09.615 "supported_io_types": { 00:08:09.615 "read": true, 00:08:09.615 "write": true, 00:08:09.615 "unmap": true, 00:08:09.615 "flush": true, 00:08:09.615 "reset": true, 00:08:09.615 "nvme_admin": false, 00:08:09.615 "nvme_io": false, 00:08:09.615 "nvme_io_md": false, 00:08:09.615 "write_zeroes": 
true, 00:08:09.615 "zcopy": true, 00:08:09.615 "get_zone_info": false, 00:08:09.615 "zone_management": false, 00:08:09.615 "zone_append": false, 00:08:09.615 "compare": false, 00:08:09.615 "compare_and_write": false, 00:08:09.615 "abort": true, 00:08:09.615 "seek_hole": false, 00:08:09.615 "seek_data": false, 00:08:09.615 "copy": true, 00:08:09.615 "nvme_iov_md": false 00:08:09.615 }, 00:08:09.615 "memory_domains": [ 00:08:09.615 { 00:08:09.615 "dma_device_id": "system", 00:08:09.615 "dma_device_type": 1 00:08:09.615 }, 00:08:09.615 { 00:08:09.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.615 "dma_device_type": 2 00:08:09.615 } 00:08:09.615 ], 00:08:09.615 "driver_specific": {} 00:08:09.615 } 00:08:09.615 ] 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.615 17:52:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.615 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.615 "name": "Existed_Raid", 00:08:09.615 "uuid": "4210b899-8cf4-45e5-96ae-1a8c9838bc39", 00:08:09.615 "strip_size_kb": 0, 00:08:09.615 "state": "online", 00:08:09.615 "raid_level": "raid1", 00:08:09.615 "superblock": false, 00:08:09.615 "num_base_bdevs": 2, 00:08:09.615 "num_base_bdevs_discovered": 2, 00:08:09.616 "num_base_bdevs_operational": 2, 00:08:09.616 "base_bdevs_list": [ 00:08:09.616 { 00:08:09.616 "name": "BaseBdev1", 00:08:09.616 "uuid": "566aa2e1-1e93-40bd-86a4-a0a1bc7fba22", 00:08:09.616 "is_configured": true, 00:08:09.616 "data_offset": 0, 00:08:09.616 "data_size": 65536 00:08:09.616 }, 00:08:09.616 { 00:08:09.616 "name": "BaseBdev2", 00:08:09.616 "uuid": "8a12f3a6-036c-4611-ba40-613dc97cc628", 00:08:09.616 "is_configured": true, 00:08:09.616 "data_offset": 0, 00:08:09.616 "data_size": 65536 00:08:09.616 } 00:08:09.616 ] 00:08:09.616 }' 00:08:09.616 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.616 17:52:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.875 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:09.875 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:09.875 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.875 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.875 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.876 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.876 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:09.876 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.876 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.876 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.876 [2024-11-26 17:52:51.694651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.876 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:10.136 "name": "Existed_Raid", 00:08:10.136 "aliases": [ 00:08:10.136 "4210b899-8cf4-45e5-96ae-1a8c9838bc39" 00:08:10.136 ], 00:08:10.136 "product_name": "Raid Volume", 00:08:10.136 "block_size": 512, 00:08:10.136 "num_blocks": 65536, 00:08:10.136 "uuid": "4210b899-8cf4-45e5-96ae-1a8c9838bc39", 00:08:10.136 "assigned_rate_limits": { 00:08:10.136 "rw_ios_per_sec": 0, 00:08:10.136 "rw_mbytes_per_sec": 0, 00:08:10.136 "r_mbytes_per_sec": 0, 00:08:10.136 
"w_mbytes_per_sec": 0 00:08:10.136 }, 00:08:10.136 "claimed": false, 00:08:10.136 "zoned": false, 00:08:10.136 "supported_io_types": { 00:08:10.136 "read": true, 00:08:10.136 "write": true, 00:08:10.136 "unmap": false, 00:08:10.136 "flush": false, 00:08:10.136 "reset": true, 00:08:10.136 "nvme_admin": false, 00:08:10.136 "nvme_io": false, 00:08:10.136 "nvme_io_md": false, 00:08:10.136 "write_zeroes": true, 00:08:10.136 "zcopy": false, 00:08:10.136 "get_zone_info": false, 00:08:10.136 "zone_management": false, 00:08:10.136 "zone_append": false, 00:08:10.136 "compare": false, 00:08:10.136 "compare_and_write": false, 00:08:10.136 "abort": false, 00:08:10.136 "seek_hole": false, 00:08:10.136 "seek_data": false, 00:08:10.136 "copy": false, 00:08:10.136 "nvme_iov_md": false 00:08:10.136 }, 00:08:10.136 "memory_domains": [ 00:08:10.136 { 00:08:10.136 "dma_device_id": "system", 00:08:10.136 "dma_device_type": 1 00:08:10.136 }, 00:08:10.136 { 00:08:10.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.136 "dma_device_type": 2 00:08:10.136 }, 00:08:10.136 { 00:08:10.136 "dma_device_id": "system", 00:08:10.136 "dma_device_type": 1 00:08:10.136 }, 00:08:10.136 { 00:08:10.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.136 "dma_device_type": 2 00:08:10.136 } 00:08:10.136 ], 00:08:10.136 "driver_specific": { 00:08:10.136 "raid": { 00:08:10.136 "uuid": "4210b899-8cf4-45e5-96ae-1a8c9838bc39", 00:08:10.136 "strip_size_kb": 0, 00:08:10.136 "state": "online", 00:08:10.136 "raid_level": "raid1", 00:08:10.136 "superblock": false, 00:08:10.136 "num_base_bdevs": 2, 00:08:10.136 "num_base_bdevs_discovered": 2, 00:08:10.136 "num_base_bdevs_operational": 2, 00:08:10.136 "base_bdevs_list": [ 00:08:10.136 { 00:08:10.136 "name": "BaseBdev1", 00:08:10.136 "uuid": "566aa2e1-1e93-40bd-86a4-a0a1bc7fba22", 00:08:10.136 "is_configured": true, 00:08:10.136 "data_offset": 0, 00:08:10.136 "data_size": 65536 00:08:10.136 }, 00:08:10.136 { 00:08:10.136 "name": "BaseBdev2", 00:08:10.136 "uuid": 
"8a12f3a6-036c-4611-ba40-613dc97cc628", 00:08:10.136 "is_configured": true, 00:08:10.136 "data_offset": 0, 00:08:10.136 "data_size": 65536 00:08:10.136 } 00:08:10.136 ] 00:08:10.136 } 00:08:10.136 } 00:08:10.136 }' 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:10.136 BaseBdev2' 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:10.136 17:52:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.136 17:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.136 [2024-11-26 17:52:51.942087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.396 "name": "Existed_Raid", 00:08:10.396 "uuid": "4210b899-8cf4-45e5-96ae-1a8c9838bc39", 00:08:10.396 "strip_size_kb": 0, 00:08:10.396 "state": "online", 00:08:10.396 "raid_level": "raid1", 00:08:10.396 "superblock": false, 00:08:10.396 "num_base_bdevs": 2, 00:08:10.396 "num_base_bdevs_discovered": 1, 00:08:10.396 "num_base_bdevs_operational": 1, 00:08:10.396 "base_bdevs_list": [ 00:08:10.396 { 
00:08:10.396 "name": null, 00:08:10.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.396 "is_configured": false, 00:08:10.396 "data_offset": 0, 00:08:10.396 "data_size": 65536 00:08:10.396 }, 00:08:10.396 { 00:08:10.396 "name": "BaseBdev2", 00:08:10.396 "uuid": "8a12f3a6-036c-4611-ba40-613dc97cc628", 00:08:10.396 "is_configured": true, 00:08:10.396 "data_offset": 0, 00:08:10.396 "data_size": 65536 00:08:10.396 } 00:08:10.396 ] 00:08:10.396 }' 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.396 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.656 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:10.656 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.656 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.656 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.656 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.656 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:10.656 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.656 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:10.656 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:10.656 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:10.656 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.656 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:10.656 [2024-11-26 17:52:52.512033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:10.656 [2024-11-26 17:52:52.512200] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:10.915 [2024-11-26 17:52:52.609786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.915 [2024-11-26 17:52:52.609971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:10.915 [2024-11-26 17:52:52.610059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:10.915 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.915 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62883 00:08:10.916 17:52:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62883 ']' 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62883 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62883 00:08:10.916 killing process with pid 62883 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62883' 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62883 00:08:10.916 [2024-11-26 17:52:52.705406] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:10.916 17:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62883 00:08:10.916 [2024-11-26 17:52:52.724404] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:12.303 00:08:12.303 real 0m5.201s 00:08:12.303 user 0m7.512s 00:08:12.303 sys 0m0.826s 00:08:12.303 ************************************ 00:08:12.303 END TEST raid_state_function_test 00:08:12.303 ************************************ 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.303 17:52:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:12.303 17:52:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:12.303 17:52:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.303 17:52:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.303 ************************************ 00:08:12.303 START TEST raid_state_function_test_sb 00:08:12.303 ************************************ 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63136 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63136' 00:08:12.303 Process raid pid: 63136 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63136 00:08:12.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63136 ']' 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.303 17:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.304 [2024-11-26 17:52:54.063900] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:08:12.304 [2024-11-26 17:52:54.064037] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.564 [2024-11-26 17:52:54.242957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.564 [2024-11-26 17:52:54.375105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.823 [2024-11-26 17:52:54.596667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.823 [2024-11-26 17:52:54.596725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n Existed_Raid 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.082 [2024-11-26 17:52:54.908961] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:13.082 [2024-11-26 17:52:54.909041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:13.082 [2024-11-26 17:52:54.909057] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:13.082 [2024-11-26 17:52:54.909070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.082 17:52:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.082 17:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.342 17:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.342 "name": "Existed_Raid", 00:08:13.342 "uuid": "77e0b83a-1fbe-4900-b175-023260db10de", 00:08:13.342 "strip_size_kb": 0, 00:08:13.342 "state": "configuring", 00:08:13.342 "raid_level": "raid1", 00:08:13.342 "superblock": true, 00:08:13.342 "num_base_bdevs": 2, 00:08:13.342 "num_base_bdevs_discovered": 0, 00:08:13.342 "num_base_bdevs_operational": 2, 00:08:13.342 "base_bdevs_list": [ 00:08:13.342 { 00:08:13.342 "name": "BaseBdev1", 00:08:13.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.342 "is_configured": false, 00:08:13.342 "data_offset": 0, 00:08:13.342 "data_size": 0 00:08:13.342 }, 00:08:13.342 { 00:08:13.342 "name": "BaseBdev2", 00:08:13.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.342 "is_configured": false, 00:08:13.342 "data_offset": 0, 00:08:13.342 "data_size": 0 00:08:13.342 } 00:08:13.342 ] 00:08:13.342 }' 00:08:13.342 17:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.342 17:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:13.602 
17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.602 [2024-11-26 17:52:55.348212] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:13.602 [2024-11-26 17:52:55.348326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.602 [2024-11-26 17:52:55.356234] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:13.602 [2024-11-26 17:52:55.356346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:13.602 [2024-11-26 17:52:55.356378] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:13.602 [2024-11-26 17:52:55.356405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.602 [2024-11-26 
17:52:55.402990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:13.602 BaseBdev1 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.602 [ 00:08:13.602 { 00:08:13.602 "name": "BaseBdev1", 00:08:13.602 "aliases": [ 00:08:13.602 "4f20ef41-e333-4822-bd9d-1ccfdc00a40a" 00:08:13.602 ], 00:08:13.602 "product_name": "Malloc disk", 00:08:13.602 "block_size": 512, 00:08:13.602 "num_blocks": 
65536, 00:08:13.602 "uuid": "4f20ef41-e333-4822-bd9d-1ccfdc00a40a", 00:08:13.602 "assigned_rate_limits": { 00:08:13.602 "rw_ios_per_sec": 0, 00:08:13.602 "rw_mbytes_per_sec": 0, 00:08:13.602 "r_mbytes_per_sec": 0, 00:08:13.602 "w_mbytes_per_sec": 0 00:08:13.602 }, 00:08:13.602 "claimed": true, 00:08:13.602 "claim_type": "exclusive_write", 00:08:13.602 "zoned": false, 00:08:13.602 "supported_io_types": { 00:08:13.602 "read": true, 00:08:13.602 "write": true, 00:08:13.602 "unmap": true, 00:08:13.602 "flush": true, 00:08:13.602 "reset": true, 00:08:13.602 "nvme_admin": false, 00:08:13.602 "nvme_io": false, 00:08:13.602 "nvme_io_md": false, 00:08:13.602 "write_zeroes": true, 00:08:13.602 "zcopy": true, 00:08:13.602 "get_zone_info": false, 00:08:13.602 "zone_management": false, 00:08:13.602 "zone_append": false, 00:08:13.602 "compare": false, 00:08:13.602 "compare_and_write": false, 00:08:13.602 "abort": true, 00:08:13.602 "seek_hole": false, 00:08:13.602 "seek_data": false, 00:08:13.602 "copy": true, 00:08:13.602 "nvme_iov_md": false 00:08:13.602 }, 00:08:13.602 "memory_domains": [ 00:08:13.602 { 00:08:13.602 "dma_device_id": "system", 00:08:13.602 "dma_device_type": 1 00:08:13.602 }, 00:08:13.602 { 00:08:13.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.602 "dma_device_type": 2 00:08:13.602 } 00:08:13.602 ], 00:08:13.602 "driver_specific": {} 00:08:13.602 } 00:08:13.602 ] 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:13.602 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.603 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.603 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.603 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.603 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.603 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.603 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.603 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.603 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.603 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.603 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.862 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.862 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.862 "name": "Existed_Raid", 00:08:13.862 "uuid": "461f24a2-9377-4d8e-bc2a-25558b9a94c5", 00:08:13.862 "strip_size_kb": 0, 00:08:13.862 "state": "configuring", 00:08:13.862 "raid_level": "raid1", 00:08:13.862 "superblock": true, 00:08:13.862 "num_base_bdevs": 2, 00:08:13.862 "num_base_bdevs_discovered": 1, 00:08:13.862 "num_base_bdevs_operational": 2, 00:08:13.862 "base_bdevs_list": [ 00:08:13.862 { 00:08:13.862 "name": "BaseBdev1", 00:08:13.862 "uuid": 
"4f20ef41-e333-4822-bd9d-1ccfdc00a40a", 00:08:13.862 "is_configured": true, 00:08:13.862 "data_offset": 2048, 00:08:13.862 "data_size": 63488 00:08:13.862 }, 00:08:13.862 { 00:08:13.862 "name": "BaseBdev2", 00:08:13.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.862 "is_configured": false, 00:08:13.862 "data_offset": 0, 00:08:13.862 "data_size": 0 00:08:13.862 } 00:08:13.862 ] 00:08:13.862 }' 00:08:13.862 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.862 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.122 [2024-11-26 17:52:55.894196] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:14.122 [2024-11-26 17:52:55.894254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.122 [2024-11-26 17:52:55.902214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.122 [2024-11-26 17:52:55.904212] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 
00:08:14.122 [2024-11-26 17:52:55.904254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.122 "name": "Existed_Raid", 00:08:14.122 "uuid": "766139d6-1ae1-43a1-aadd-3b29ba87dbec", 00:08:14.122 "strip_size_kb": 0, 00:08:14.122 "state": "configuring", 00:08:14.122 "raid_level": "raid1", 00:08:14.122 "superblock": true, 00:08:14.122 "num_base_bdevs": 2, 00:08:14.122 "num_base_bdevs_discovered": 1, 00:08:14.122 "num_base_bdevs_operational": 2, 00:08:14.122 "base_bdevs_list": [ 00:08:14.122 { 00:08:14.122 "name": "BaseBdev1", 00:08:14.122 "uuid": "4f20ef41-e333-4822-bd9d-1ccfdc00a40a", 00:08:14.122 "is_configured": true, 00:08:14.122 "data_offset": 2048, 00:08:14.122 "data_size": 63488 00:08:14.122 }, 00:08:14.122 { 00:08:14.122 "name": "BaseBdev2", 00:08:14.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.122 "is_configured": false, 00:08:14.122 "data_offset": 0, 00:08:14.122 "data_size": 0 00:08:14.122 } 00:08:14.122 ] 00:08:14.122 }' 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.122 17:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.692 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:14.692 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.692 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.692 [2024-11-26 17:52:56.376179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:14.692 [2024-11-26 17:52:56.376578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: 
io device register 0x617000007e80 00:08:14.692 [2024-11-26 17:52:56.376654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:14.692 [2024-11-26 17:52:56.377075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:14.692 BaseBdev2 00:08:14.692 [2024-11-26 17:52:56.377370] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:14.692 [2024-11-26 17:52:56.377439] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:14.692 [2024-11-26 17:52:56.377720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.692 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.692 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:14.692 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:14.692 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:14.692 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:14.692 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:14.692 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:14.692 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:14.692 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.692 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.692 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.692 17:52:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:14.692 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.692 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.692 [ 00:08:14.692 { 00:08:14.692 "name": "BaseBdev2", 00:08:14.692 "aliases": [ 00:08:14.692 "f4b2fb76-9703-48e9-bb87-68994e0f2c68" 00:08:14.692 ], 00:08:14.692 "product_name": "Malloc disk", 00:08:14.692 "block_size": 512, 00:08:14.692 "num_blocks": 65536, 00:08:14.692 "uuid": "f4b2fb76-9703-48e9-bb87-68994e0f2c68", 00:08:14.692 "assigned_rate_limits": { 00:08:14.692 "rw_ios_per_sec": 0, 00:08:14.692 "rw_mbytes_per_sec": 0, 00:08:14.692 "r_mbytes_per_sec": 0, 00:08:14.692 "w_mbytes_per_sec": 0 00:08:14.692 }, 00:08:14.692 "claimed": true, 00:08:14.692 "claim_type": "exclusive_write", 00:08:14.692 "zoned": false, 00:08:14.692 "supported_io_types": { 00:08:14.692 "read": true, 00:08:14.692 "write": true, 00:08:14.692 "unmap": true, 00:08:14.692 "flush": true, 00:08:14.692 "reset": true, 00:08:14.692 "nvme_admin": false, 00:08:14.692 "nvme_io": false, 00:08:14.692 "nvme_io_md": false, 00:08:14.692 "write_zeroes": true, 00:08:14.693 "zcopy": true, 00:08:14.693 "get_zone_info": false, 00:08:14.693 "zone_management": false, 00:08:14.693 "zone_append": false, 00:08:14.693 "compare": false, 00:08:14.693 "compare_and_write": false, 00:08:14.693 "abort": true, 00:08:14.693 "seek_hole": false, 00:08:14.693 "seek_data": false, 00:08:14.693 "copy": true, 00:08:14.693 "nvme_iov_md": false 00:08:14.693 }, 00:08:14.693 "memory_domains": [ 00:08:14.693 { 00:08:14.693 "dma_device_id": "system", 00:08:14.693 "dma_device_type": 1 00:08:14.693 }, 00:08:14.693 { 00:08:14.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.693 "dma_device_type": 2 00:08:14.693 } 00:08:14.693 ], 00:08:14.693 "driver_specific": {} 00:08:14.693 } 00:08:14.693 ] 
00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.693 17:52:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.693 "name": "Existed_Raid", 00:08:14.693 "uuid": "766139d6-1ae1-43a1-aadd-3b29ba87dbec", 00:08:14.693 "strip_size_kb": 0, 00:08:14.693 "state": "online", 00:08:14.693 "raid_level": "raid1", 00:08:14.693 "superblock": true, 00:08:14.693 "num_base_bdevs": 2, 00:08:14.693 "num_base_bdevs_discovered": 2, 00:08:14.693 "num_base_bdevs_operational": 2, 00:08:14.693 "base_bdevs_list": [ 00:08:14.693 { 00:08:14.693 "name": "BaseBdev1", 00:08:14.693 "uuid": "4f20ef41-e333-4822-bd9d-1ccfdc00a40a", 00:08:14.693 "is_configured": true, 00:08:14.693 "data_offset": 2048, 00:08:14.693 "data_size": 63488 00:08:14.693 }, 00:08:14.693 { 00:08:14.693 "name": "BaseBdev2", 00:08:14.693 "uuid": "f4b2fb76-9703-48e9-bb87-68994e0f2c68", 00:08:14.693 "is_configured": true, 00:08:14.693 "data_offset": 2048, 00:08:14.693 "data_size": 63488 00:08:14.693 } 00:08:14.693 ] 00:08:14.693 }' 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.693 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.261 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:15.261 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:15.261 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:15.261 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:15.261 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 
00:08:15.261 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:15.261 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:15.261 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:15.261 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.261 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.261 [2024-11-26 17:52:56.859682] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.261 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.261 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:15.261 "name": "Existed_Raid", 00:08:15.261 "aliases": [ 00:08:15.261 "766139d6-1ae1-43a1-aadd-3b29ba87dbec" 00:08:15.261 ], 00:08:15.261 "product_name": "Raid Volume", 00:08:15.261 "block_size": 512, 00:08:15.261 "num_blocks": 63488, 00:08:15.261 "uuid": "766139d6-1ae1-43a1-aadd-3b29ba87dbec", 00:08:15.261 "assigned_rate_limits": { 00:08:15.261 "rw_ios_per_sec": 0, 00:08:15.261 "rw_mbytes_per_sec": 0, 00:08:15.261 "r_mbytes_per_sec": 0, 00:08:15.261 "w_mbytes_per_sec": 0 00:08:15.261 }, 00:08:15.261 "claimed": false, 00:08:15.261 "zoned": false, 00:08:15.261 "supported_io_types": { 00:08:15.261 "read": true, 00:08:15.261 "write": true, 00:08:15.261 "unmap": false, 00:08:15.261 "flush": false, 00:08:15.261 "reset": true, 00:08:15.261 "nvme_admin": false, 00:08:15.261 "nvme_io": false, 00:08:15.261 "nvme_io_md": false, 00:08:15.261 "write_zeroes": true, 00:08:15.261 "zcopy": false, 00:08:15.261 "get_zone_info": false, 00:08:15.261 "zone_management": false, 00:08:15.261 "zone_append": false, 00:08:15.261 "compare": false, 00:08:15.261 "compare_and_write": false, 
00:08:15.262 "abort": false, 00:08:15.262 "seek_hole": false, 00:08:15.262 "seek_data": false, 00:08:15.262 "copy": false, 00:08:15.262 "nvme_iov_md": false 00:08:15.262 }, 00:08:15.262 "memory_domains": [ 00:08:15.262 { 00:08:15.262 "dma_device_id": "system", 00:08:15.262 "dma_device_type": 1 00:08:15.262 }, 00:08:15.262 { 00:08:15.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.262 "dma_device_type": 2 00:08:15.262 }, 00:08:15.262 { 00:08:15.262 "dma_device_id": "system", 00:08:15.262 "dma_device_type": 1 00:08:15.262 }, 00:08:15.262 { 00:08:15.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.262 "dma_device_type": 2 00:08:15.262 } 00:08:15.262 ], 00:08:15.262 "driver_specific": { 00:08:15.262 "raid": { 00:08:15.262 "uuid": "766139d6-1ae1-43a1-aadd-3b29ba87dbec", 00:08:15.262 "strip_size_kb": 0, 00:08:15.262 "state": "online", 00:08:15.262 "raid_level": "raid1", 00:08:15.262 "superblock": true, 00:08:15.262 "num_base_bdevs": 2, 00:08:15.262 "num_base_bdevs_discovered": 2, 00:08:15.262 "num_base_bdevs_operational": 2, 00:08:15.262 "base_bdevs_list": [ 00:08:15.262 { 00:08:15.262 "name": "BaseBdev1", 00:08:15.262 "uuid": "4f20ef41-e333-4822-bd9d-1ccfdc00a40a", 00:08:15.262 "is_configured": true, 00:08:15.262 "data_offset": 2048, 00:08:15.262 "data_size": 63488 00:08:15.262 }, 00:08:15.262 { 00:08:15.262 "name": "BaseBdev2", 00:08:15.262 "uuid": "f4b2fb76-9703-48e9-bb87-68994e0f2c68", 00:08:15.262 "is_configured": true, 00:08:15.262 "data_offset": 2048, 00:08:15.262 "data_size": 63488 00:08:15.262 } 00:08:15.262 ] 00:08:15.262 } 00:08:15.262 } 00:08:15.262 }' 00:08:15.262 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:15.262 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:15.262 BaseBdev2' 00:08:15.262 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.262 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:15.262 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.262 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:15.262 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.262 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.262 17:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.262 17:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.262 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.262 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.262 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.262 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:15.262 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.262 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.262 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.262 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.262 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:15.262 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.262 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:15.262 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.262 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.262 [2024-11-26 17:52:57.091156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:15.521 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.521 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:15.521 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:15.521 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:15.521 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:15.521 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:15.521 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:15.521 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.521 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.521 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:15.521 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:15.521 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:15.521 17:52:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.521 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.521 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.521 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.521 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.521 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.522 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.522 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.522 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.522 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.522 "name": "Existed_Raid", 00:08:15.522 "uuid": "766139d6-1ae1-43a1-aadd-3b29ba87dbec", 00:08:15.522 "strip_size_kb": 0, 00:08:15.522 "state": "online", 00:08:15.522 "raid_level": "raid1", 00:08:15.522 "superblock": true, 00:08:15.522 "num_base_bdevs": 2, 00:08:15.522 "num_base_bdevs_discovered": 1, 00:08:15.522 "num_base_bdevs_operational": 1, 00:08:15.522 "base_bdevs_list": [ 00:08:15.522 { 00:08:15.522 "name": null, 00:08:15.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.522 "is_configured": false, 00:08:15.522 "data_offset": 0, 00:08:15.522 "data_size": 63488 00:08:15.522 }, 00:08:15.522 { 00:08:15.522 "name": "BaseBdev2", 00:08:15.522 "uuid": "f4b2fb76-9703-48e9-bb87-68994e0f2c68", 00:08:15.522 "is_configured": true, 00:08:15.522 "data_offset": 2048, 00:08:15.522 "data_size": 63488 00:08:15.522 } 00:08:15.522 ] 00:08:15.522 }' 00:08:15.522 
17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.522 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.782 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:15.782 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:15.782 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.782 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.782 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.782 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:15.782 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.042 [2024-11-26 17:52:57.658275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:16.042 [2024-11-26 17:52:57.658467] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.042 [2024-11-26 17:52:57.763717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.042 [2024-11-26 17:52:57.763856] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.042 [2024-11-26 17:52:57.763934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63136 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63136 ']' 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63136 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63136 00:08:16.042 killing process with pid 63136 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63136' 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63136 00:08:16.042 [2024-11-26 17:52:57.857717] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.042 17:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63136 00:08:16.042 [2024-11-26 17:52:57.875352] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.429 17:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:17.429 00:08:17.429 real 0m5.082s 00:08:17.429 user 0m7.307s 00:08:17.429 sys 0m0.806s 00:08:17.429 17:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.429 ************************************ 00:08:17.429 END TEST raid_state_function_test_sb 00:08:17.429 ************************************ 00:08:17.429 17:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.429 17:52:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:17.429 17:52:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:17.429 17:52:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.429 17:52:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.429 
************************************ 00:08:17.429 START TEST raid_superblock_test 00:08:17.429 ************************************ 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63383 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # waitforlisten 63383 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63383 ']' 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.429 17:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.429 [2024-11-26 17:52:59.205132] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:08:17.429 [2024-11-26 17:52:59.205270] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63383 ] 00:08:17.688 [2024-11-26 17:52:59.362118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.688 [2024-11-26 17:52:59.487136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.947 [2024-11-26 17:52:59.704538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.947 [2024-11-26 17:52:59.704613] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:18.517 
17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.517 malloc1 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.517 [2024-11-26 17:53:00.147127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:18.517 [2024-11-26 17:53:00.147258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.517 [2024-11-26 17:53:00.147307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:18.517 [2024-11-26 17:53:00.147360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.517 [2024-11-26 17:53:00.149688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.517 [2024-11-26 17:53:00.149766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:18.517 pt1 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:18.517 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.518 malloc2 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.518 [2024-11-26 17:53:00.203273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:18.518 [2024-11-26 17:53:00.203334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.518 [2024-11-26 17:53:00.203361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:18.518 [2024-11-26 17:53:00.203370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.518 [2024-11-26 17:53:00.205665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.518 [2024-11-26 17:53:00.205707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:18.518 
pt2 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.518 [2024-11-26 17:53:00.215296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:18.518 [2024-11-26 17:53:00.217109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:18.518 [2024-11-26 17:53:00.217261] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:18.518 [2024-11-26 17:53:00.217278] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:18.518 [2024-11-26 17:53:00.217529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:18.518 [2024-11-26 17:53:00.217691] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:18.518 [2024-11-26 17:53:00.217706] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:18.518 [2024-11-26 17:53:00.217852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.518 "name": "raid_bdev1", 00:08:18.518 "uuid": "699c1244-e86b-4bc4-9348-e1a345f7bec8", 00:08:18.518 "strip_size_kb": 0, 00:08:18.518 "state": "online", 00:08:18.518 "raid_level": "raid1", 00:08:18.518 "superblock": true, 00:08:18.518 "num_base_bdevs": 2, 00:08:18.518 "num_base_bdevs_discovered": 2, 00:08:18.518 "num_base_bdevs_operational": 2, 00:08:18.518 "base_bdevs_list": [ 00:08:18.518 { 00:08:18.518 "name": "pt1", 00:08:18.518 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:18.518 "is_configured": true, 00:08:18.518 "data_offset": 2048, 00:08:18.518 "data_size": 63488 00:08:18.518 }, 00:08:18.518 { 00:08:18.518 "name": "pt2", 00:08:18.518 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:18.518 "is_configured": true, 00:08:18.518 "data_offset": 2048, 00:08:18.518 "data_size": 63488 00:08:18.518 } 00:08:18.518 ] 00:08:18.518 }' 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.518 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.114 [2024-11-26 17:53:00.710792] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:08:19.114 "name": "raid_bdev1", 00:08:19.114 "aliases": [ 00:08:19.114 "699c1244-e86b-4bc4-9348-e1a345f7bec8" 00:08:19.114 ], 00:08:19.114 "product_name": "Raid Volume", 00:08:19.114 "block_size": 512, 00:08:19.114 "num_blocks": 63488, 00:08:19.114 "uuid": "699c1244-e86b-4bc4-9348-e1a345f7bec8", 00:08:19.114 "assigned_rate_limits": { 00:08:19.114 "rw_ios_per_sec": 0, 00:08:19.114 "rw_mbytes_per_sec": 0, 00:08:19.114 "r_mbytes_per_sec": 0, 00:08:19.114 "w_mbytes_per_sec": 0 00:08:19.114 }, 00:08:19.114 "claimed": false, 00:08:19.114 "zoned": false, 00:08:19.114 "supported_io_types": { 00:08:19.114 "read": true, 00:08:19.114 "write": true, 00:08:19.114 "unmap": false, 00:08:19.114 "flush": false, 00:08:19.114 "reset": true, 00:08:19.114 "nvme_admin": false, 00:08:19.114 "nvme_io": false, 00:08:19.114 "nvme_io_md": false, 00:08:19.114 "write_zeroes": true, 00:08:19.114 "zcopy": false, 00:08:19.114 "get_zone_info": false, 00:08:19.114 "zone_management": false, 00:08:19.114 "zone_append": false, 00:08:19.114 "compare": false, 00:08:19.114 "compare_and_write": false, 00:08:19.114 "abort": false, 00:08:19.114 "seek_hole": false, 00:08:19.114 "seek_data": false, 00:08:19.114 "copy": false, 00:08:19.114 "nvme_iov_md": false 00:08:19.114 }, 00:08:19.114 "memory_domains": [ 00:08:19.114 { 00:08:19.114 "dma_device_id": "system", 00:08:19.114 "dma_device_type": 1 00:08:19.114 }, 00:08:19.114 { 00:08:19.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.114 "dma_device_type": 2 00:08:19.114 }, 00:08:19.114 { 00:08:19.114 "dma_device_id": "system", 00:08:19.114 "dma_device_type": 1 00:08:19.114 }, 00:08:19.114 { 00:08:19.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.114 "dma_device_type": 2 00:08:19.114 } 00:08:19.114 ], 00:08:19.114 "driver_specific": { 00:08:19.114 "raid": { 00:08:19.114 "uuid": "699c1244-e86b-4bc4-9348-e1a345f7bec8", 00:08:19.114 "strip_size_kb": 0, 00:08:19.114 "state": "online", 00:08:19.114 "raid_level": "raid1", 
00:08:19.114 "superblock": true, 00:08:19.114 "num_base_bdevs": 2, 00:08:19.114 "num_base_bdevs_discovered": 2, 00:08:19.114 "num_base_bdevs_operational": 2, 00:08:19.114 "base_bdevs_list": [ 00:08:19.114 { 00:08:19.114 "name": "pt1", 00:08:19.114 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.114 "is_configured": true, 00:08:19.114 "data_offset": 2048, 00:08:19.114 "data_size": 63488 00:08:19.114 }, 00:08:19.114 { 00:08:19.114 "name": "pt2", 00:08:19.114 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.114 "is_configured": true, 00:08:19.114 "data_offset": 2048, 00:08:19.114 "data_size": 63488 00:08:19.114 } 00:08:19.114 ] 00:08:19.114 } 00:08:19.114 } 00:08:19.114 }' 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:19.114 pt2' 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.114 17:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:19.115 [2024-11-26 17:53:00.966415] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.374 17:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.374 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=699c1244-e86b-4bc4-9348-e1a345f7bec8 00:08:19.374 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 699c1244-e86b-4bc4-9348-e1a345f7bec8 ']' 00:08:19.374 17:53:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:19.374 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.375 [2024-11-26 17:53:01.017931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.375 [2024-11-26 17:53:01.017966] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.375 [2024-11-26 17:53:01.018085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.375 [2024-11-26 17:53:01.018165] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.375 [2024-11-26 17:53:01.018179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:19.375 17:53:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.375 [2024-11-26 17:53:01.161789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:19.375 [2024-11-26 17:53:01.163998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:19.375 [2024-11-26 17:53:01.164091] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:19.375 [2024-11-26 17:53:01.164173] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:19.375 [2024-11-26 17:53:01.164191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.375 [2024-11-26 17:53:01.164203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:19.375 request: 00:08:19.375 { 00:08:19.375 "name": "raid_bdev1", 00:08:19.375 "raid_level": "raid1", 00:08:19.375 "base_bdevs": [ 00:08:19.375 "malloc1", 00:08:19.375 "malloc2" 00:08:19.375 ], 00:08:19.375 "superblock": false, 00:08:19.375 "method": "bdev_raid_create", 00:08:19.375 "req_id": 1 00:08:19.375 } 00:08:19.375 Got 
JSON-RPC error response 00:08:19.375 response: 00:08:19.375 { 00:08:19.375 "code": -17, 00:08:19.375 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:19.375 } 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.375 [2024-11-26 17:53:01.225608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:19.375 [2024-11-26 17:53:01.225752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:08:19.375 [2024-11-26 17:53:01.225800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:19.375 [2024-11-26 17:53:01.225834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.375 [2024-11-26 17:53:01.228184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.375 [2024-11-26 17:53:01.228280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:19.375 [2024-11-26 17:53:01.228412] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:19.375 [2024-11-26 17:53:01.228501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:19.375 pt1 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.375 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.635 
17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.635 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.635 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.635 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.635 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.635 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.635 "name": "raid_bdev1", 00:08:19.635 "uuid": "699c1244-e86b-4bc4-9348-e1a345f7bec8", 00:08:19.635 "strip_size_kb": 0, 00:08:19.635 "state": "configuring", 00:08:19.635 "raid_level": "raid1", 00:08:19.635 "superblock": true, 00:08:19.635 "num_base_bdevs": 2, 00:08:19.635 "num_base_bdevs_discovered": 1, 00:08:19.635 "num_base_bdevs_operational": 2, 00:08:19.635 "base_bdevs_list": [ 00:08:19.635 { 00:08:19.635 "name": "pt1", 00:08:19.635 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.635 "is_configured": true, 00:08:19.635 "data_offset": 2048, 00:08:19.635 "data_size": 63488 00:08:19.635 }, 00:08:19.635 { 00:08:19.635 "name": null, 00:08:19.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.635 "is_configured": false, 00:08:19.635 "data_offset": 2048, 00:08:19.635 "data_size": 63488 00:08:19.635 } 00:08:19.635 ] 00:08:19.635 }' 00:08:19.635 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.635 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.894 [2024-11-26 17:53:01.700883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:19.894 [2024-11-26 17:53:01.701063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.894 [2024-11-26 17:53:01.701113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:19.894 [2024-11-26 17:53:01.701156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.894 [2024-11-26 17:53:01.701725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.894 [2024-11-26 17:53:01.701805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:19.894 [2024-11-26 17:53:01.701936] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:19.894 [2024-11-26 17:53:01.702000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:19.894 [2024-11-26 17:53:01.702206] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:19.894 [2024-11-26 17:53:01.702253] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:19.894 [2024-11-26 17:53:01.702576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:19.894 [2024-11-26 17:53:01.702811] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:19.894 [2024-11-26 17:53:01.702858] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:08:19.894 [2024-11-26 17:53:01.703100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.894 pt2 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.894 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.895 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.895 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.895 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.895 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:19.895 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.895 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.895 "name": "raid_bdev1", 00:08:19.895 "uuid": "699c1244-e86b-4bc4-9348-e1a345f7bec8", 00:08:19.895 "strip_size_kb": 0, 00:08:19.895 "state": "online", 00:08:19.895 "raid_level": "raid1", 00:08:19.895 "superblock": true, 00:08:19.895 "num_base_bdevs": 2, 00:08:19.895 "num_base_bdevs_discovered": 2, 00:08:19.895 "num_base_bdevs_operational": 2, 00:08:19.895 "base_bdevs_list": [ 00:08:19.895 { 00:08:19.895 "name": "pt1", 00:08:19.895 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.895 "is_configured": true, 00:08:19.895 "data_offset": 2048, 00:08:19.895 "data_size": 63488 00:08:19.895 }, 00:08:19.895 { 00:08:19.895 "name": "pt2", 00:08:19.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.895 "is_configured": true, 00:08:19.895 "data_offset": 2048, 00:08:19.895 "data_size": 63488 00:08:19.895 } 00:08:19.895 ] 00:08:19.895 }' 00:08:19.895 17:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.154 17:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.414 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:20.414 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:20.414 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:20.414 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:20.414 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:20.414 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:20.414 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:20.414 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.414 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.414 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:20.414 [2024-11-26 17:53:02.168349] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.414 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.414 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:20.414 "name": "raid_bdev1", 00:08:20.414 "aliases": [ 00:08:20.414 "699c1244-e86b-4bc4-9348-e1a345f7bec8" 00:08:20.414 ], 00:08:20.414 "product_name": "Raid Volume", 00:08:20.414 "block_size": 512, 00:08:20.414 "num_blocks": 63488, 00:08:20.414 "uuid": "699c1244-e86b-4bc4-9348-e1a345f7bec8", 00:08:20.414 "assigned_rate_limits": { 00:08:20.414 "rw_ios_per_sec": 0, 00:08:20.414 "rw_mbytes_per_sec": 0, 00:08:20.414 "r_mbytes_per_sec": 0, 00:08:20.414 "w_mbytes_per_sec": 0 00:08:20.414 }, 00:08:20.414 "claimed": false, 00:08:20.414 "zoned": false, 00:08:20.414 "supported_io_types": { 00:08:20.414 "read": true, 00:08:20.414 "write": true, 00:08:20.414 "unmap": false, 00:08:20.414 "flush": false, 00:08:20.414 "reset": true, 00:08:20.414 "nvme_admin": false, 00:08:20.414 "nvme_io": false, 00:08:20.414 "nvme_io_md": false, 00:08:20.414 "write_zeroes": true, 00:08:20.414 "zcopy": false, 00:08:20.414 "get_zone_info": false, 00:08:20.414 "zone_management": false, 00:08:20.414 "zone_append": false, 00:08:20.414 "compare": false, 00:08:20.414 "compare_and_write": false, 00:08:20.414 "abort": false, 00:08:20.414 "seek_hole": false, 00:08:20.414 "seek_data": false, 00:08:20.414 "copy": false, 00:08:20.414 "nvme_iov_md": false 00:08:20.414 }, 00:08:20.414 "memory_domains": [ 00:08:20.414 { 00:08:20.414 "dma_device_id": 
"system", 00:08:20.414 "dma_device_type": 1 00:08:20.414 }, 00:08:20.414 { 00:08:20.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.414 "dma_device_type": 2 00:08:20.414 }, 00:08:20.414 { 00:08:20.414 "dma_device_id": "system", 00:08:20.414 "dma_device_type": 1 00:08:20.414 }, 00:08:20.414 { 00:08:20.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.414 "dma_device_type": 2 00:08:20.414 } 00:08:20.414 ], 00:08:20.414 "driver_specific": { 00:08:20.414 "raid": { 00:08:20.414 "uuid": "699c1244-e86b-4bc4-9348-e1a345f7bec8", 00:08:20.414 "strip_size_kb": 0, 00:08:20.414 "state": "online", 00:08:20.414 "raid_level": "raid1", 00:08:20.414 "superblock": true, 00:08:20.414 "num_base_bdevs": 2, 00:08:20.414 "num_base_bdevs_discovered": 2, 00:08:20.414 "num_base_bdevs_operational": 2, 00:08:20.414 "base_bdevs_list": [ 00:08:20.414 { 00:08:20.414 "name": "pt1", 00:08:20.414 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:20.414 "is_configured": true, 00:08:20.414 "data_offset": 2048, 00:08:20.414 "data_size": 63488 00:08:20.414 }, 00:08:20.414 { 00:08:20.414 "name": "pt2", 00:08:20.414 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.414 "is_configured": true, 00:08:20.414 "data_offset": 2048, 00:08:20.414 "data_size": 63488 00:08:20.414 } 00:08:20.414 ] 00:08:20.414 } 00:08:20.414 } 00:08:20.414 }' 00:08:20.414 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:20.414 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:20.414 pt2' 00:08:20.414 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:20.674 [2024-11-26 17:53:02.423927] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 699c1244-e86b-4bc4-9348-e1a345f7bec8 '!=' 699c1244-e86b-4bc4-9348-e1a345f7bec8 ']' 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.674 [2024-11-26 17:53:02.471610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=1 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.674 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.675 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.675 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.675 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.675 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.675 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.675 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.675 "name": "raid_bdev1", 00:08:20.675 "uuid": "699c1244-e86b-4bc4-9348-e1a345f7bec8", 00:08:20.675 "strip_size_kb": 0, 00:08:20.675 "state": "online", 00:08:20.675 "raid_level": "raid1", 00:08:20.675 "superblock": true, 00:08:20.675 "num_base_bdevs": 2, 00:08:20.675 "num_base_bdevs_discovered": 1, 00:08:20.675 "num_base_bdevs_operational": 1, 00:08:20.675 "base_bdevs_list": [ 00:08:20.675 { 00:08:20.675 "name": null, 00:08:20.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.675 "is_configured": false, 00:08:20.675 "data_offset": 0, 00:08:20.675 "data_size": 63488 00:08:20.675 }, 00:08:20.675 { 00:08:20.675 "name": "pt2", 00:08:20.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.675 "is_configured": true, 00:08:20.675 "data_offset": 2048, 00:08:20.675 "data_size": 63488 00:08:20.675 } 00:08:20.675 ] 00:08:20.675 }' 00:08:20.675 17:53:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.675 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.247 [2024-11-26 17:53:02.922789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.247 [2024-11-26 17:53:02.922864] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.247 [2024-11-26 17:53:02.922981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.247 [2024-11-26 17:53:02.923077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.247 [2024-11-26 17:53:02.923131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:21.247 
17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.247 [2024-11-26 17:53:02.982691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:21.247 [2024-11-26 17:53:02.982762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.247 [2024-11-26 17:53:02.982782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:21.247 [2024-11-26 17:53:02.982795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.247 [2024-11-26 
17:53:02.985353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.247 [2024-11-26 17:53:02.985400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:21.247 [2024-11-26 17:53:02.985494] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:21.247 [2024-11-26 17:53:02.985548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:21.247 [2024-11-26 17:53:02.985679] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:21.247 [2024-11-26 17:53:02.985694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:21.247 [2024-11-26 17:53:02.985972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:21.247 [2024-11-26 17:53:02.986169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:21.247 [2024-11-26 17:53:02.986181] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:21.247 [2024-11-26 17:53:02.986366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.247 pt2 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.247 17:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.247 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.247 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.247 "name": "raid_bdev1", 00:08:21.247 "uuid": "699c1244-e86b-4bc4-9348-e1a345f7bec8", 00:08:21.247 "strip_size_kb": 0, 00:08:21.247 "state": "online", 00:08:21.247 "raid_level": "raid1", 00:08:21.247 "superblock": true, 00:08:21.247 "num_base_bdevs": 2, 00:08:21.247 "num_base_bdevs_discovered": 1, 00:08:21.247 "num_base_bdevs_operational": 1, 00:08:21.247 "base_bdevs_list": [ 00:08:21.247 { 00:08:21.247 "name": null, 00:08:21.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.247 "is_configured": false, 00:08:21.247 "data_offset": 2048, 00:08:21.247 "data_size": 63488 00:08:21.247 }, 00:08:21.247 { 00:08:21.247 "name": "pt2", 00:08:21.247 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.247 "is_configured": true, 00:08:21.247 "data_offset": 2048, 00:08:21.247 "data_size": 63488 00:08:21.247 } 00:08:21.247 ] 00:08:21.247 }' 
00:08:21.247 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.247 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.866 [2024-11-26 17:53:03.405937] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.866 [2024-11-26 17:53:03.406031] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.866 [2024-11-26 17:53:03.406147] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.866 [2024-11-26 17:53:03.406224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.866 [2024-11-26 17:53:03.406281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.866 [2024-11-26 17:53:03.465879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:21.866 [2024-11-26 17:53:03.465957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.866 [2024-11-26 17:53:03.465981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:21.866 [2024-11-26 17:53:03.465992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.866 [2024-11-26 17:53:03.468579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.866 [2024-11-26 17:53:03.468620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:21.866 [2024-11-26 17:53:03.468741] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:21.866 [2024-11-26 17:53:03.468796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:21.866 [2024-11-26 17:53:03.468990] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:21.866 [2024-11-26 17:53:03.469005] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.866 [2024-11-26 17:53:03.469044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:21.866 [2024-11-26 17:53:03.469114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:08:21.866 [2024-11-26 17:53:03.469199] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:21.866 [2024-11-26 17:53:03.469209] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:21.866 [2024-11-26 17:53:03.469495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:21.866 [2024-11-26 17:53:03.469677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:21.866 [2024-11-26 17:53:03.469693] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:21.866 [2024-11-26 17:53:03.469906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.866 pt1 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.866 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.867 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.867 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.867 "name": "raid_bdev1", 00:08:21.867 "uuid": "699c1244-e86b-4bc4-9348-e1a345f7bec8", 00:08:21.867 "strip_size_kb": 0, 00:08:21.867 "state": "online", 00:08:21.867 "raid_level": "raid1", 00:08:21.867 "superblock": true, 00:08:21.867 "num_base_bdevs": 2, 00:08:21.867 "num_base_bdevs_discovered": 1, 00:08:21.867 "num_base_bdevs_operational": 1, 00:08:21.867 "base_bdevs_list": [ 00:08:21.867 { 00:08:21.867 "name": null, 00:08:21.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.867 "is_configured": false, 00:08:21.867 "data_offset": 2048, 00:08:21.867 "data_size": 63488 00:08:21.867 }, 00:08:21.867 { 00:08:21.867 "name": "pt2", 00:08:21.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.867 "is_configured": true, 00:08:21.867 "data_offset": 2048, 00:08:21.867 "data_size": 63488 00:08:21.867 } 00:08:21.867 ] 00:08:21.867 }' 00:08:21.867 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.867 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.126 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:22.126 17:53:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:22.126 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.126 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.126 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.126 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:22.126 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:22.126 17:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:22.126 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.126 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.126 [2024-11-26 17:53:03.977327] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.384 17:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.384 17:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 699c1244-e86b-4bc4-9348-e1a345f7bec8 '!=' 699c1244-e86b-4bc4-9348-e1a345f7bec8 ']' 00:08:22.384 17:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63383 00:08:22.384 17:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63383 ']' 00:08:22.384 17:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63383 00:08:22.384 17:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:22.384 17:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:22.384 17:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63383 00:08:22.384 
17:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:22.384 17:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:22.384 17:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63383' 00:08:22.384 killing process with pid 63383 00:08:22.385 17:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63383 00:08:22.385 [2024-11-26 17:53:04.064474] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:22.385 [2024-11-26 17:53:04.064592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.385 [2024-11-26 17:53:04.064649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.385 [2024-11-26 17:53:04.064667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:22.385 17:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63383 00:08:22.643 [2024-11-26 17:53:04.288145] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:24.021 17:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:24.021 00:08:24.021 real 0m6.426s 00:08:24.021 user 0m9.764s 00:08:24.021 sys 0m1.061s 00:08:24.021 17:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.021 17:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.021 ************************************ 00:08:24.021 END TEST raid_superblock_test 00:08:24.021 ************************************ 00:08:24.021 17:53:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:24.021 17:53:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:24.021 17:53:05 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.021 17:53:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:24.021 ************************************ 00:08:24.021 START TEST raid_read_error_test 00:08:24.021 ************************************ 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:24.021 17:53:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VQhxVOrQbx 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63713 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63713 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63713 ']' 00:08:24.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.021 17:53:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.021 [2024-11-26 17:53:05.723678] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:08:24.021 [2024-11-26 17:53:05.723800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63713 ] 00:08:24.280 [2024-11-26 17:53:05.901679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.280 [2024-11-26 17:53:06.027984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.539 [2024-11-26 17:53:06.248176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.540 [2024-11-26 17:53:06.248250] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.799 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.799 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:24.799 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:24.799 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:24.799 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.799 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.799 BaseBdev1_malloc 00:08:24.799 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.799 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:24.799 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.799 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.799 true 00:08:24.799 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.799 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:24.799 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.799 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.799 [2024-11-26 17:53:06.656581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:24.799 [2024-11-26 17:53:06.656656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.799 [2024-11-26 17:53:06.656680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:24.799 [2024-11-26 17:53:06.656692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.799 [2024-11-26 17:53:06.659181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.799 [2024-11-26 17:53:06.659230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:25.058 BaseBdev1 00:08:25.058 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.058 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:25.058 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:25.058 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.058 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:25.058 BaseBdev2_malloc 00:08:25.058 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.058 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:25.058 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.058 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.058 true 00:08:25.058 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.058 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:25.058 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.058 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.058 [2024-11-26 17:53:06.727422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:25.058 [2024-11-26 17:53:06.727503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.058 [2024-11-26 17:53:06.727524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:25.058 [2024-11-26 17:53:06.727535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.058 [2024-11-26 17:53:06.729970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.058 [2024-11-26 17:53:06.730099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:25.058 BaseBdev2 00:08:25.058 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.058 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:25.058 17:53:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.058 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.058 [2024-11-26 17:53:06.739452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.058 [2024-11-26 17:53:06.741491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.058 [2024-11-26 17:53:06.741833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:25.059 [2024-11-26 17:53:06.741858] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:25.059 [2024-11-26 17:53:06.742185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:25.059 [2024-11-26 17:53:06.742406] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:25.059 [2024-11-26 17:53:06.742418] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:25.059 [2024-11-26 17:53:06.742621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.059 "name": "raid_bdev1", 00:08:25.059 "uuid": "da7afe36-80d1-4284-be5b-a057202bfb72", 00:08:25.059 "strip_size_kb": 0, 00:08:25.059 "state": "online", 00:08:25.059 "raid_level": "raid1", 00:08:25.059 "superblock": true, 00:08:25.059 "num_base_bdevs": 2, 00:08:25.059 "num_base_bdevs_discovered": 2, 00:08:25.059 "num_base_bdevs_operational": 2, 00:08:25.059 "base_bdevs_list": [ 00:08:25.059 { 00:08:25.059 "name": "BaseBdev1", 00:08:25.059 "uuid": "6ae890ba-a0e7-5060-9df2-1258372a7d7c", 00:08:25.059 "is_configured": true, 00:08:25.059 "data_offset": 2048, 00:08:25.059 "data_size": 63488 00:08:25.059 }, 00:08:25.059 { 00:08:25.059 "name": "BaseBdev2", 00:08:25.059 "uuid": "229073fb-f6b0-513e-8e17-aa67d01dc5fd", 00:08:25.059 "is_configured": true, 00:08:25.059 "data_offset": 2048, 00:08:25.059 "data_size": 63488 00:08:25.059 } 00:08:25.059 ] 00:08:25.059 }' 00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.059 17:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.626 17:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:25.626 17:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:25.626 [2024-11-26 17:53:07.279953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.565 17:53:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.565 "name": "raid_bdev1", 00:08:26.565 "uuid": "da7afe36-80d1-4284-be5b-a057202bfb72", 00:08:26.565 "strip_size_kb": 0, 00:08:26.565 "state": "online", 00:08:26.565 "raid_level": "raid1", 00:08:26.565 "superblock": true, 00:08:26.565 "num_base_bdevs": 2, 00:08:26.565 "num_base_bdevs_discovered": 2, 00:08:26.565 "num_base_bdevs_operational": 2, 00:08:26.565 "base_bdevs_list": [ 00:08:26.565 { 00:08:26.565 "name": "BaseBdev1", 00:08:26.565 "uuid": "6ae890ba-a0e7-5060-9df2-1258372a7d7c", 00:08:26.565 "is_configured": true, 00:08:26.565 "data_offset": 2048, 00:08:26.565 "data_size": 63488 00:08:26.565 }, 00:08:26.565 { 00:08:26.565 "name": "BaseBdev2", 00:08:26.565 "uuid": "229073fb-f6b0-513e-8e17-aa67d01dc5fd", 00:08:26.565 "is_configured": true, 00:08:26.565 "data_offset": 2048, 00:08:26.565 "data_size": 63488 
00:08:26.565 } 00:08:26.565 ] 00:08:26.565 }' 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.565 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.135 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:27.135 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.135 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.135 [2024-11-26 17:53:08.709226] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:27.135 [2024-11-26 17:53:08.709381] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.135 [2024-11-26 17:53:08.712778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.135 [2024-11-26 17:53:08.712931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.135 [2024-11-26 17:53:08.713139] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.135 [2024-11-26 17:53:08.713209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:27.135 { 00:08:27.135 "results": [ 00:08:27.135 { 00:08:27.135 "job": "raid_bdev1", 00:08:27.135 "core_mask": "0x1", 00:08:27.135 "workload": "randrw", 00:08:27.135 "percentage": 50, 00:08:27.135 "status": "finished", 00:08:27.135 "queue_depth": 1, 00:08:27.135 "io_size": 131072, 00:08:27.135 "runtime": 1.430273, 00:08:27.135 "iops": 15938.2159909332, 00:08:27.135 "mibps": 1992.27699886665, 00:08:27.135 "io_failed": 0, 00:08:27.135 "io_timeout": 0, 00:08:27.135 "avg_latency_us": 59.71903750830415, 00:08:27.135 "min_latency_us": 24.258515283842794, 00:08:27.135 "max_latency_us": 1760.0279475982534 00:08:27.135 } 00:08:27.135 ], 
00:08:27.135 "core_count": 1 00:08:27.135 } 00:08:27.135 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.135 17:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63713 00:08:27.135 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63713 ']' 00:08:27.135 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63713 00:08:27.135 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:27.135 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.135 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63713 00:08:27.135 killing process with pid 63713 00:08:27.135 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.135 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.135 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63713' 00:08:27.135 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63713 00:08:27.135 17:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63713 00:08:27.135 [2024-11-26 17:53:08.753568] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.135 [2024-11-26 17:53:08.905701] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.515 17:53:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VQhxVOrQbx 00:08:28.515 17:53:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:28.515 17:53:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:28.515 17:53:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:28.515 ************************************ 00:08:28.515 END TEST raid_read_error_test 00:08:28.515 ************************************ 00:08:28.515 17:53:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:28.515 17:53:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:28.515 17:53:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:28.515 17:53:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:28.515 00:08:28.515 real 0m4.652s 00:08:28.515 user 0m5.529s 00:08:28.515 sys 0m0.609s 00:08:28.515 17:53:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.515 17:53:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.515 17:53:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:28.515 17:53:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:28.515 17:53:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.515 17:53:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.515 ************************************ 00:08:28.515 START TEST raid_write_error_test 00:08:28.515 ************************************ 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZozcOvB45m 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63864 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@811 -- # waitforlisten 63864 00:08:28.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63864 ']' 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:28.515 17:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.775 [2024-11-26 17:53:10.431047] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:08:28.775 [2024-11-26 17:53:10.431178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63864 ] 00:08:28.775 [2024-11-26 17:53:10.610526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.034 [2024-11-26 17:53:10.739813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.295 [2024-11-26 17:53:10.966822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.295 [2024-11-26 17:53:10.966981] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.555 BaseBdev1_malloc 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.555 true 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.555 [2024-11-26 17:53:11.392151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:29.555 [2024-11-26 17:53:11.392217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.555 [2024-11-26 17:53:11.392244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:29.555 [2024-11-26 17:53:11.392256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.555 [2024-11-26 17:53:11.394786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.555 [2024-11-26 17:53:11.394897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:29.555 BaseBdev1 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.555 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.814 BaseBdev2_malloc 00:08:29.814 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.814 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:29.814 17:53:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.814 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.814 true 00:08:29.814 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.814 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:29.814 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.814 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.814 [2024-11-26 17:53:11.453583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:29.814 [2024-11-26 17:53:11.453660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.814 [2024-11-26 17:53:11.453683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:29.814 [2024-11-26 17:53:11.453697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.814 [2024-11-26 17:53:11.456238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.814 [2024-11-26 17:53:11.456338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:29.814 BaseBdev2 00:08:29.814 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.814 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:29.814 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.814 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.814 [2024-11-26 17:53:11.461635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:29.814 [2024-11-26 17:53:11.463784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:29.814 [2024-11-26 17:53:11.464051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:29.815 [2024-11-26 17:53:11.464071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:29.815 [2024-11-26 17:53:11.464384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:29.815 [2024-11-26 17:53:11.464603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:29.815 [2024-11-26 17:53:11.464616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:29.815 [2024-11-26 17:53:11.464825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.815 "name": "raid_bdev1", 00:08:29.815 "uuid": "efc9d73c-c65a-4406-bcd2-3561af071ae8", 00:08:29.815 "strip_size_kb": 0, 00:08:29.815 "state": "online", 00:08:29.815 "raid_level": "raid1", 00:08:29.815 "superblock": true, 00:08:29.815 "num_base_bdevs": 2, 00:08:29.815 "num_base_bdevs_discovered": 2, 00:08:29.815 "num_base_bdevs_operational": 2, 00:08:29.815 "base_bdevs_list": [ 00:08:29.815 { 00:08:29.815 "name": "BaseBdev1", 00:08:29.815 "uuid": "3af75eec-6af2-5d91-8b1a-2e700a0a39e0", 00:08:29.815 "is_configured": true, 00:08:29.815 "data_offset": 2048, 00:08:29.815 "data_size": 63488 00:08:29.815 }, 00:08:29.815 { 00:08:29.815 "name": "BaseBdev2", 00:08:29.815 "uuid": "677bcd2c-0033-56c2-8110-6585b3c67ef8", 00:08:29.815 "is_configured": true, 00:08:29.815 "data_offset": 2048, 00:08:29.815 "data_size": 63488 00:08:29.815 } 00:08:29.815 ] 00:08:29.815 }' 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.815 17:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.073 17:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:30.073 17:53:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:30.331 [2024-11-26 17:53:12.006466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:31.266 17:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:31.266 17:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.266 17:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.266 [2024-11-26 17:53:12.915851] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:31.266 [2024-11-26 17:53:12.915933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:31.266 [2024-11-26 17:53:12.916147] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:31.266 17:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.266 17:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:31.266 17:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:31.266 17:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:31.266 17:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:31.266 17:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:31.267 17:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.267 17:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.267 17:53:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.267 17:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.267 17:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:31.267 17:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.267 17:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.267 17:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.267 17:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.267 17:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.267 17:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.267 17:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.267 17:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.267 17:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.267 17:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.267 "name": "raid_bdev1", 00:08:31.267 "uuid": "efc9d73c-c65a-4406-bcd2-3561af071ae8", 00:08:31.267 "strip_size_kb": 0, 00:08:31.267 "state": "online", 00:08:31.267 "raid_level": "raid1", 00:08:31.267 "superblock": true, 00:08:31.267 "num_base_bdevs": 2, 00:08:31.267 "num_base_bdevs_discovered": 1, 00:08:31.267 "num_base_bdevs_operational": 1, 00:08:31.267 "base_bdevs_list": [ 00:08:31.267 { 00:08:31.267 "name": null, 00:08:31.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.267 "is_configured": false, 00:08:31.267 "data_offset": 0, 00:08:31.267 "data_size": 63488 00:08:31.267 }, 00:08:31.267 { 00:08:31.267 "name": 
"BaseBdev2", 00:08:31.267 "uuid": "677bcd2c-0033-56c2-8110-6585b3c67ef8", 00:08:31.267 "is_configured": true, 00:08:31.267 "data_offset": 2048, 00:08:31.267 "data_size": 63488 00:08:31.267 } 00:08:31.267 ] 00:08:31.267 }' 00:08:31.267 17:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.267 17:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.834 17:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:31.834 17:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.834 17:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.834 [2024-11-26 17:53:13.410693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:31.834 [2024-11-26 17:53:13.410812] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.834 [2024-11-26 17:53:13.414109] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.834 [2024-11-26 17:53:13.414237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.834 [2024-11-26 17:53:13.414332] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.834 [2024-11-26 17:53:13.414398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:31.834 { 00:08:31.834 "results": [ 00:08:31.834 { 00:08:31.834 "job": "raid_bdev1", 00:08:31.834 "core_mask": "0x1", 00:08:31.834 "workload": "randrw", 00:08:31.834 "percentage": 50, 00:08:31.834 "status": "finished", 00:08:31.834 "queue_depth": 1, 00:08:31.834 "io_size": 131072, 00:08:31.834 "runtime": 1.405035, 00:08:31.834 "iops": 16746.20205190618, 00:08:31.834 "mibps": 2093.2752564882726, 00:08:31.834 "io_failed": 0, 00:08:31.834 "io_timeout": 0, 
00:08:31.834 "avg_latency_us": 56.40445830946146, 00:08:31.834 "min_latency_us": 25.9353711790393, 00:08:31.834 "max_latency_us": 1731.4096069868995 00:08:31.834 } 00:08:31.834 ], 00:08:31.834 "core_count": 1 00:08:31.834 } 00:08:31.834 17:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.834 17:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63864 00:08:31.834 17:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63864 ']' 00:08:31.834 17:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63864 00:08:31.834 17:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:31.834 17:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:31.834 17:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63864 00:08:31.834 17:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:31.834 17:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:31.834 killing process with pid 63864 00:08:31.834 17:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63864' 00:08:31.834 17:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63864 00:08:31.834 17:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63864 00:08:31.834 [2024-11-26 17:53:13.463919] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:31.834 [2024-11-26 17:53:13.626583] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:33.211 17:53:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZozcOvB45m 00:08:33.211 17:53:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:33.211 17:53:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:33.211 17:53:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:33.211 17:53:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:33.211 ************************************ 00:08:33.211 END TEST raid_write_error_test 00:08:33.211 ************************************ 00:08:33.211 17:53:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:33.211 17:53:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:33.211 17:53:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:33.211 00:08:33.211 real 0m4.728s 00:08:33.211 user 0m5.670s 00:08:33.211 sys 0m0.568s 00:08:33.211 17:53:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.211 17:53:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.472 17:53:15 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:33.472 17:53:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:33.472 17:53:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:33.472 17:53:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:33.472 17:53:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.472 17:53:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:33.472 ************************************ 00:08:33.472 START TEST raid_state_function_test 00:08:33.472 ************************************ 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.472 Process raid pid: 64002 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64002 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64002' 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64002 00:08:33.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64002 ']' 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.472 17:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.472 [2024-11-26 17:53:15.203548] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:08:33.472 [2024-11-26 17:53:15.203753] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.736 [2024-11-26 17:53:15.368142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.736 [2024-11-26 17:53:15.506519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.996 [2024-11-26 17:53:15.753642] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.996 [2024-11-26 17:53:15.753793] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.589 [2024-11-26 17:53:16.176871] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:34.589 [2024-11-26 17:53:16.176937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:34.589 [2024-11-26 17:53:16.176950] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.589 [2024-11-26 17:53:16.176962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.589 [2024-11-26 17:53:16.176970] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:34.589 [2024-11-26 17:53:16.176980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.589 "name": "Existed_Raid", 00:08:34.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.589 "strip_size_kb": 64, 00:08:34.589 "state": "configuring", 00:08:34.589 "raid_level": "raid0", 00:08:34.589 "superblock": false, 00:08:34.589 "num_base_bdevs": 3, 00:08:34.589 "num_base_bdevs_discovered": 0, 00:08:34.589 "num_base_bdevs_operational": 3, 00:08:34.589 "base_bdevs_list": [ 00:08:34.589 { 00:08:34.589 "name": "BaseBdev1", 00:08:34.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.589 "is_configured": false, 00:08:34.589 "data_offset": 0, 00:08:34.589 "data_size": 0 00:08:34.589 }, 00:08:34.589 { 00:08:34.589 "name": "BaseBdev2", 00:08:34.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.589 "is_configured": false, 00:08:34.589 "data_offset": 0, 00:08:34.589 "data_size": 0 00:08:34.589 }, 00:08:34.589 { 00:08:34.589 "name": "BaseBdev3", 00:08:34.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.589 "is_configured": false, 00:08:34.589 "data_offset": 0, 00:08:34.589 "data_size": 0 00:08:34.589 } 00:08:34.589 ] 00:08:34.589 }' 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.589 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.848 17:53:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.848 [2024-11-26 17:53:16.628060] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:34.848 [2024-11-26 17:53:16.628169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.848 [2024-11-26 17:53:16.640072] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:34.848 [2024-11-26 17:53:16.640131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:34.848 [2024-11-26 17:53:16.640143] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.848 [2024-11-26 17:53:16.640154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.848 [2024-11-26 17:53:16.640162] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:34.848 [2024-11-26 17:53:16.640172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.848 [2024-11-26 17:53:16.695873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.848 BaseBdev1 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.848 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.107 [ 00:08:35.107 { 00:08:35.107 "name": "BaseBdev1", 00:08:35.107 "aliases": [ 00:08:35.107 "446f9276-e965-49ae-a995-0e612670bbca" 00:08:35.107 ], 00:08:35.107 
"product_name": "Malloc disk", 00:08:35.107 "block_size": 512, 00:08:35.107 "num_blocks": 65536, 00:08:35.107 "uuid": "446f9276-e965-49ae-a995-0e612670bbca", 00:08:35.107 "assigned_rate_limits": { 00:08:35.107 "rw_ios_per_sec": 0, 00:08:35.107 "rw_mbytes_per_sec": 0, 00:08:35.107 "r_mbytes_per_sec": 0, 00:08:35.107 "w_mbytes_per_sec": 0 00:08:35.107 }, 00:08:35.107 "claimed": true, 00:08:35.107 "claim_type": "exclusive_write", 00:08:35.107 "zoned": false, 00:08:35.107 "supported_io_types": { 00:08:35.107 "read": true, 00:08:35.107 "write": true, 00:08:35.107 "unmap": true, 00:08:35.107 "flush": true, 00:08:35.107 "reset": true, 00:08:35.107 "nvme_admin": false, 00:08:35.107 "nvme_io": false, 00:08:35.107 "nvme_io_md": false, 00:08:35.107 "write_zeroes": true, 00:08:35.107 "zcopy": true, 00:08:35.107 "get_zone_info": false, 00:08:35.107 "zone_management": false, 00:08:35.107 "zone_append": false, 00:08:35.107 "compare": false, 00:08:35.107 "compare_and_write": false, 00:08:35.107 "abort": true, 00:08:35.107 "seek_hole": false, 00:08:35.107 "seek_data": false, 00:08:35.107 "copy": true, 00:08:35.107 "nvme_iov_md": false 00:08:35.107 }, 00:08:35.107 "memory_domains": [ 00:08:35.107 { 00:08:35.107 "dma_device_id": "system", 00:08:35.107 "dma_device_type": 1 00:08:35.107 }, 00:08:35.107 { 00:08:35.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.107 "dma_device_type": 2 00:08:35.107 } 00:08:35.107 ], 00:08:35.107 "driver_specific": {} 00:08:35.107 } 00:08:35.107 ] 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.107 17:53:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.107 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.107 "name": "Existed_Raid", 00:08:35.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.108 "strip_size_kb": 64, 00:08:35.108 "state": "configuring", 00:08:35.108 "raid_level": "raid0", 00:08:35.108 "superblock": false, 00:08:35.108 "num_base_bdevs": 3, 00:08:35.108 "num_base_bdevs_discovered": 1, 00:08:35.108 "num_base_bdevs_operational": 3, 00:08:35.108 "base_bdevs_list": [ 00:08:35.108 { 00:08:35.108 "name": "BaseBdev1", 
00:08:35.108 "uuid": "446f9276-e965-49ae-a995-0e612670bbca", 00:08:35.108 "is_configured": true, 00:08:35.108 "data_offset": 0, 00:08:35.108 "data_size": 65536 00:08:35.108 }, 00:08:35.108 { 00:08:35.108 "name": "BaseBdev2", 00:08:35.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.108 "is_configured": false, 00:08:35.108 "data_offset": 0, 00:08:35.108 "data_size": 0 00:08:35.108 }, 00:08:35.108 { 00:08:35.108 "name": "BaseBdev3", 00:08:35.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.108 "is_configured": false, 00:08:35.108 "data_offset": 0, 00:08:35.108 "data_size": 0 00:08:35.108 } 00:08:35.108 ] 00:08:35.108 }' 00:08:35.108 17:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.108 17:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.367 [2024-11-26 17:53:17.187213] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:35.367 [2024-11-26 17:53:17.187287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.367 [2024-11-26 
17:53:17.199289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.367 [2024-11-26 17:53:17.201554] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.367 [2024-11-26 17:53:17.201665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.367 [2024-11-26 17:53:17.201707] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:35.367 [2024-11-26 17:53:17.201736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.367 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.625 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.625 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.625 "name": "Existed_Raid", 00:08:35.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.625 "strip_size_kb": 64, 00:08:35.625 "state": "configuring", 00:08:35.625 "raid_level": "raid0", 00:08:35.625 "superblock": false, 00:08:35.625 "num_base_bdevs": 3, 00:08:35.625 "num_base_bdevs_discovered": 1, 00:08:35.625 "num_base_bdevs_operational": 3, 00:08:35.625 "base_bdevs_list": [ 00:08:35.625 { 00:08:35.625 "name": "BaseBdev1", 00:08:35.625 "uuid": "446f9276-e965-49ae-a995-0e612670bbca", 00:08:35.625 "is_configured": true, 00:08:35.625 "data_offset": 0, 00:08:35.625 "data_size": 65536 00:08:35.625 }, 00:08:35.625 { 00:08:35.625 "name": "BaseBdev2", 00:08:35.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.625 "is_configured": false, 00:08:35.625 "data_offset": 0, 00:08:35.625 "data_size": 0 00:08:35.625 }, 00:08:35.625 { 00:08:35.625 "name": "BaseBdev3", 00:08:35.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.625 "is_configured": false, 00:08:35.625 "data_offset": 0, 00:08:35.625 "data_size": 0 00:08:35.625 } 00:08:35.625 ] 00:08:35.625 }' 00:08:35.625 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:35.625 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.882 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:35.882 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.882 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.882 [2024-11-26 17:53:17.677877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.882 BaseBdev2 00:08:35.882 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.882 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:35.882 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:35.882 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:35.882 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:35.882 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.882 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.882 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:35.882 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.882 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.882 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.882 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:35.882 17:53:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.882 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.882 [ 00:08:35.882 { 00:08:35.882 "name": "BaseBdev2", 00:08:35.882 "aliases": [ 00:08:35.882 "44d1c28d-69a9-4cfe-9661-744712dc633b" 00:08:35.882 ], 00:08:35.882 "product_name": "Malloc disk", 00:08:35.882 "block_size": 512, 00:08:35.882 "num_blocks": 65536, 00:08:35.882 "uuid": "44d1c28d-69a9-4cfe-9661-744712dc633b", 00:08:35.882 "assigned_rate_limits": { 00:08:35.882 "rw_ios_per_sec": 0, 00:08:35.882 "rw_mbytes_per_sec": 0, 00:08:35.882 "r_mbytes_per_sec": 0, 00:08:35.882 "w_mbytes_per_sec": 0 00:08:35.882 }, 00:08:35.882 "claimed": true, 00:08:35.882 "claim_type": "exclusive_write", 00:08:35.882 "zoned": false, 00:08:35.882 "supported_io_types": { 00:08:35.882 "read": true, 00:08:35.882 "write": true, 00:08:35.882 "unmap": true, 00:08:35.882 "flush": true, 00:08:35.882 "reset": true, 00:08:35.882 "nvme_admin": false, 00:08:35.882 "nvme_io": false, 00:08:35.882 "nvme_io_md": false, 00:08:35.882 "write_zeroes": true, 00:08:35.882 "zcopy": true, 00:08:35.882 "get_zone_info": false, 00:08:35.882 "zone_management": false, 00:08:35.882 "zone_append": false, 00:08:35.882 "compare": false, 00:08:35.882 "compare_and_write": false, 00:08:35.882 "abort": true, 00:08:35.883 "seek_hole": false, 00:08:35.883 "seek_data": false, 00:08:35.883 "copy": true, 00:08:35.883 "nvme_iov_md": false 00:08:35.883 }, 00:08:35.883 "memory_domains": [ 00:08:35.883 { 00:08:35.883 "dma_device_id": "system", 00:08:35.883 "dma_device_type": 1 00:08:35.883 }, 00:08:35.883 { 00:08:35.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.883 "dma_device_type": 2 00:08:35.883 } 00:08:35.883 ], 00:08:35.883 "driver_specific": {} 00:08:35.883 } 00:08:35.883 ] 00:08:35.883 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.883 17:53:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:35.883 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:35.883 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.883 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.883 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.883 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.883 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.883 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.883 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.883 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.883 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.883 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.883 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.883 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.883 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.883 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.883 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.883 17:53:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.141 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.141 "name": "Existed_Raid", 00:08:36.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.141 "strip_size_kb": 64, 00:08:36.141 "state": "configuring", 00:08:36.141 "raid_level": "raid0", 00:08:36.141 "superblock": false, 00:08:36.141 "num_base_bdevs": 3, 00:08:36.141 "num_base_bdevs_discovered": 2, 00:08:36.141 "num_base_bdevs_operational": 3, 00:08:36.141 "base_bdevs_list": [ 00:08:36.141 { 00:08:36.141 "name": "BaseBdev1", 00:08:36.141 "uuid": "446f9276-e965-49ae-a995-0e612670bbca", 00:08:36.141 "is_configured": true, 00:08:36.141 "data_offset": 0, 00:08:36.141 "data_size": 65536 00:08:36.141 }, 00:08:36.141 { 00:08:36.141 "name": "BaseBdev2", 00:08:36.141 "uuid": "44d1c28d-69a9-4cfe-9661-744712dc633b", 00:08:36.141 "is_configured": true, 00:08:36.141 "data_offset": 0, 00:08:36.141 "data_size": 65536 00:08:36.141 }, 00:08:36.141 { 00:08:36.141 "name": "BaseBdev3", 00:08:36.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.141 "is_configured": false, 00:08:36.141 "data_offset": 0, 00:08:36.141 "data_size": 0 00:08:36.141 } 00:08:36.141 ] 00:08:36.141 }' 00:08:36.141 17:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.142 17:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.401 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:36.401 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.401 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.401 [2024-11-26 17:53:18.241882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.401 [2024-11-26 17:53:18.242055] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:36.401 [2024-11-26 17:53:18.242095] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:36.401 [2024-11-26 17:53:18.242432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:36.401 [2024-11-26 17:53:18.242669] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:36.401 [2024-11-26 17:53:18.242716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:36.401 [2024-11-26 17:53:18.243085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.401 BaseBdev3 00:08:36.401 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.401 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:36.401 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:36.401 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.401 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:36.401 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.401 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:36.401 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:36.401 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.401 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.401 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.401 
17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:36.401 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.401 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.401 [ 00:08:36.401 { 00:08:36.401 "name": "BaseBdev3", 00:08:36.660 "aliases": [ 00:08:36.660 "59502085-6a24-4c54-b0b0-b39e8eb82b29" 00:08:36.660 ], 00:08:36.660 "product_name": "Malloc disk", 00:08:36.660 "block_size": 512, 00:08:36.660 "num_blocks": 65536, 00:08:36.660 "uuid": "59502085-6a24-4c54-b0b0-b39e8eb82b29", 00:08:36.660 "assigned_rate_limits": { 00:08:36.660 "rw_ios_per_sec": 0, 00:08:36.660 "rw_mbytes_per_sec": 0, 00:08:36.660 "r_mbytes_per_sec": 0, 00:08:36.660 "w_mbytes_per_sec": 0 00:08:36.660 }, 00:08:36.660 "claimed": true, 00:08:36.660 "claim_type": "exclusive_write", 00:08:36.660 "zoned": false, 00:08:36.660 "supported_io_types": { 00:08:36.660 "read": true, 00:08:36.660 "write": true, 00:08:36.660 "unmap": true, 00:08:36.660 "flush": true, 00:08:36.660 "reset": true, 00:08:36.660 "nvme_admin": false, 00:08:36.660 "nvme_io": false, 00:08:36.660 "nvme_io_md": false, 00:08:36.660 "write_zeroes": true, 00:08:36.660 "zcopy": true, 00:08:36.660 "get_zone_info": false, 00:08:36.660 "zone_management": false, 00:08:36.660 "zone_append": false, 00:08:36.660 "compare": false, 00:08:36.660 "compare_and_write": false, 00:08:36.660 "abort": true, 00:08:36.660 "seek_hole": false, 00:08:36.660 "seek_data": false, 00:08:36.660 "copy": true, 00:08:36.660 "nvme_iov_md": false 00:08:36.660 }, 00:08:36.660 "memory_domains": [ 00:08:36.660 { 00:08:36.660 "dma_device_id": "system", 00:08:36.660 "dma_device_type": 1 00:08:36.660 }, 00:08:36.660 { 00:08:36.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.660 "dma_device_type": 2 00:08:36.660 } 00:08:36.660 ], 00:08:36.660 "driver_specific": {} 00:08:36.660 } 00:08:36.660 ] 
00:08:36.660 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.660 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:36.660 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:36.660 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:36.660 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:36.660 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.660 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.660 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.660 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.660 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.660 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.660 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.660 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.660 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.661 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.661 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.661 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.661 17:53:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:36.661 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.661 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.661 "name": "Existed_Raid", 00:08:36.661 "uuid": "5cd47f0c-0dd7-4631-8633-af7995ec5a37", 00:08:36.661 "strip_size_kb": 64, 00:08:36.661 "state": "online", 00:08:36.661 "raid_level": "raid0", 00:08:36.661 "superblock": false, 00:08:36.661 "num_base_bdevs": 3, 00:08:36.661 "num_base_bdevs_discovered": 3, 00:08:36.661 "num_base_bdevs_operational": 3, 00:08:36.661 "base_bdevs_list": [ 00:08:36.661 { 00:08:36.661 "name": "BaseBdev1", 00:08:36.661 "uuid": "446f9276-e965-49ae-a995-0e612670bbca", 00:08:36.661 "is_configured": true, 00:08:36.661 "data_offset": 0, 00:08:36.661 "data_size": 65536 00:08:36.661 }, 00:08:36.661 { 00:08:36.661 "name": "BaseBdev2", 00:08:36.661 "uuid": "44d1c28d-69a9-4cfe-9661-744712dc633b", 00:08:36.661 "is_configured": true, 00:08:36.661 "data_offset": 0, 00:08:36.661 "data_size": 65536 00:08:36.661 }, 00:08:36.661 { 00:08:36.661 "name": "BaseBdev3", 00:08:36.661 "uuid": "59502085-6a24-4c54-b0b0-b39e8eb82b29", 00:08:36.661 "is_configured": true, 00:08:36.661 "data_offset": 0, 00:08:36.661 "data_size": 65536 00:08:36.661 } 00:08:36.661 ] 00:08:36.661 }' 00:08:36.661 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.661 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.921 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:36.921 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:36.921 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:36.921 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:36.921 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:36.921 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:36.921 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:36.921 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:36.921 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.921 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.921 [2024-11-26 17:53:18.713556] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.921 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.921 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:36.921 "name": "Existed_Raid", 00:08:36.921 "aliases": [ 00:08:36.921 "5cd47f0c-0dd7-4631-8633-af7995ec5a37" 00:08:36.921 ], 00:08:36.921 "product_name": "Raid Volume", 00:08:36.921 "block_size": 512, 00:08:36.921 "num_blocks": 196608, 00:08:36.921 "uuid": "5cd47f0c-0dd7-4631-8633-af7995ec5a37", 00:08:36.921 "assigned_rate_limits": { 00:08:36.921 "rw_ios_per_sec": 0, 00:08:36.921 "rw_mbytes_per_sec": 0, 00:08:36.921 "r_mbytes_per_sec": 0, 00:08:36.921 "w_mbytes_per_sec": 0 00:08:36.921 }, 00:08:36.921 "claimed": false, 00:08:36.921 "zoned": false, 00:08:36.921 "supported_io_types": { 00:08:36.921 "read": true, 00:08:36.921 "write": true, 00:08:36.921 "unmap": true, 00:08:36.921 "flush": true, 00:08:36.921 "reset": true, 00:08:36.921 "nvme_admin": false, 00:08:36.921 "nvme_io": false, 00:08:36.921 "nvme_io_md": false, 00:08:36.921 "write_zeroes": true, 00:08:36.921 "zcopy": false, 00:08:36.921 "get_zone_info": false, 00:08:36.921 "zone_management": false, 00:08:36.921 
"zone_append": false, 00:08:36.921 "compare": false, 00:08:36.921 "compare_and_write": false, 00:08:36.921 "abort": false, 00:08:36.921 "seek_hole": false, 00:08:36.921 "seek_data": false, 00:08:36.921 "copy": false, 00:08:36.921 "nvme_iov_md": false 00:08:36.921 }, 00:08:36.921 "memory_domains": [ 00:08:36.921 { 00:08:36.921 "dma_device_id": "system", 00:08:36.921 "dma_device_type": 1 00:08:36.921 }, 00:08:36.921 { 00:08:36.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.921 "dma_device_type": 2 00:08:36.921 }, 00:08:36.921 { 00:08:36.921 "dma_device_id": "system", 00:08:36.921 "dma_device_type": 1 00:08:36.921 }, 00:08:36.921 { 00:08:36.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.921 "dma_device_type": 2 00:08:36.921 }, 00:08:36.921 { 00:08:36.921 "dma_device_id": "system", 00:08:36.921 "dma_device_type": 1 00:08:36.921 }, 00:08:36.921 { 00:08:36.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.921 "dma_device_type": 2 00:08:36.921 } 00:08:36.921 ], 00:08:36.921 "driver_specific": { 00:08:36.921 "raid": { 00:08:36.921 "uuid": "5cd47f0c-0dd7-4631-8633-af7995ec5a37", 00:08:36.921 "strip_size_kb": 64, 00:08:36.921 "state": "online", 00:08:36.921 "raid_level": "raid0", 00:08:36.921 "superblock": false, 00:08:36.921 "num_base_bdevs": 3, 00:08:36.921 "num_base_bdevs_discovered": 3, 00:08:36.921 "num_base_bdevs_operational": 3, 00:08:36.921 "base_bdevs_list": [ 00:08:36.921 { 00:08:36.921 "name": "BaseBdev1", 00:08:36.921 "uuid": "446f9276-e965-49ae-a995-0e612670bbca", 00:08:36.921 "is_configured": true, 00:08:36.921 "data_offset": 0, 00:08:36.921 "data_size": 65536 00:08:36.921 }, 00:08:36.921 { 00:08:36.921 "name": "BaseBdev2", 00:08:36.921 "uuid": "44d1c28d-69a9-4cfe-9661-744712dc633b", 00:08:36.921 "is_configured": true, 00:08:36.921 "data_offset": 0, 00:08:36.921 "data_size": 65536 00:08:36.921 }, 00:08:36.921 { 00:08:36.921 "name": "BaseBdev3", 00:08:36.921 "uuid": "59502085-6a24-4c54-b0b0-b39e8eb82b29", 00:08:36.921 "is_configured": true, 
00:08:36.921 "data_offset": 0, 00:08:36.921 "data_size": 65536 00:08:36.921 } 00:08:36.921 ] 00:08:36.921 } 00:08:36.921 } 00:08:36.921 }' 00:08:36.921 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:37.181 BaseBdev2 00:08:37.181 BaseBdev3' 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.181 17:53:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.181 17:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.181 [2024-11-26 17:53:18.972956] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:37.181 [2024-11-26 17:53:18.973063] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.181 [2024-11-26 17:53:18.973165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.442 "name": "Existed_Raid", 00:08:37.442 "uuid": "5cd47f0c-0dd7-4631-8633-af7995ec5a37", 00:08:37.442 "strip_size_kb": 64, 00:08:37.442 "state": "offline", 00:08:37.442 "raid_level": "raid0", 00:08:37.442 "superblock": false, 00:08:37.442 "num_base_bdevs": 3, 00:08:37.442 "num_base_bdevs_discovered": 2, 00:08:37.442 "num_base_bdevs_operational": 2, 00:08:37.442 "base_bdevs_list": [ 00:08:37.442 { 00:08:37.442 "name": null, 00:08:37.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.442 "is_configured": false, 00:08:37.442 "data_offset": 0, 00:08:37.442 "data_size": 65536 00:08:37.442 }, 00:08:37.442 { 00:08:37.442 "name": "BaseBdev2", 00:08:37.442 "uuid": "44d1c28d-69a9-4cfe-9661-744712dc633b", 00:08:37.442 "is_configured": true, 00:08:37.442 "data_offset": 0, 00:08:37.442 "data_size": 65536 00:08:37.442 }, 00:08:37.442 { 00:08:37.442 "name": "BaseBdev3", 00:08:37.442 "uuid": "59502085-6a24-4c54-b0b0-b39e8eb82b29", 00:08:37.442 "is_configured": true, 00:08:37.442 "data_offset": 0, 00:08:37.442 "data_size": 65536 00:08:37.442 } 00:08:37.442 ] 00:08:37.442 }' 00:08:37.442 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.442 17:53:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.700 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:37.700 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.959 [2024-11-26 17:53:19.613459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.959 17:53:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.959 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.959 [2024-11-26 17:53:19.786368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:37.959 [2024-11-26 17:53:19.786492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:38.222 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.222 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:38.222 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:38.222 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.222 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:38.222 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.222 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:38.222 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.222 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:38.222 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:38.222 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:38.222 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:38.222 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:38.222 17:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:38.222 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.222 17:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.222 BaseBdev2 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.222 [ 00:08:38.222 { 00:08:38.222 "name": "BaseBdev2", 00:08:38.222 "aliases": [ 00:08:38.222 "dbdafd43-d294-460d-a91b-c0ebb2b99ab8" 00:08:38.222 ], 00:08:38.222 "product_name": "Malloc disk", 00:08:38.222 "block_size": 512, 00:08:38.222 "num_blocks": 65536, 00:08:38.222 "uuid": "dbdafd43-d294-460d-a91b-c0ebb2b99ab8", 00:08:38.222 "assigned_rate_limits": { 00:08:38.222 "rw_ios_per_sec": 0, 00:08:38.222 "rw_mbytes_per_sec": 0, 00:08:38.222 "r_mbytes_per_sec": 0, 00:08:38.222 "w_mbytes_per_sec": 0 00:08:38.222 }, 00:08:38.222 "claimed": false, 00:08:38.222 "zoned": false, 00:08:38.222 "supported_io_types": { 00:08:38.222 "read": true, 00:08:38.222 "write": true, 00:08:38.222 "unmap": true, 00:08:38.222 "flush": true, 00:08:38.222 "reset": true, 00:08:38.222 "nvme_admin": false, 00:08:38.222 "nvme_io": false, 00:08:38.222 "nvme_io_md": false, 00:08:38.222 "write_zeroes": true, 00:08:38.222 "zcopy": true, 00:08:38.222 "get_zone_info": false, 00:08:38.222 "zone_management": false, 00:08:38.222 "zone_append": false, 00:08:38.222 "compare": false, 00:08:38.222 "compare_and_write": false, 00:08:38.222 "abort": true, 00:08:38.222 "seek_hole": false, 00:08:38.222 "seek_data": false, 00:08:38.222 "copy": true, 00:08:38.222 "nvme_iov_md": false 00:08:38.222 }, 00:08:38.222 "memory_domains": [ 00:08:38.222 { 00:08:38.222 "dma_device_id": "system", 00:08:38.222 "dma_device_type": 1 00:08:38.222 }, 
00:08:38.222 { 00:08:38.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.222 "dma_device_type": 2 00:08:38.222 } 00:08:38.222 ], 00:08:38.222 "driver_specific": {} 00:08:38.222 } 00:08:38.222 ] 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.222 BaseBdev3 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:38.222 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.482 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.482 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:38.482 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.482 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.482 [ 00:08:38.482 { 00:08:38.482 "name": "BaseBdev3", 00:08:38.482 "aliases": [ 00:08:38.482 "d4fcff32-da2c-417f-a8e2-b11cfce1f9c6" 00:08:38.482 ], 00:08:38.482 "product_name": "Malloc disk", 00:08:38.482 "block_size": 512, 00:08:38.482 "num_blocks": 65536, 00:08:38.482 "uuid": "d4fcff32-da2c-417f-a8e2-b11cfce1f9c6", 00:08:38.482 "assigned_rate_limits": { 00:08:38.482 "rw_ios_per_sec": 0, 00:08:38.482 "rw_mbytes_per_sec": 0, 00:08:38.482 "r_mbytes_per_sec": 0, 00:08:38.482 "w_mbytes_per_sec": 0 00:08:38.482 }, 00:08:38.482 "claimed": false, 00:08:38.482 "zoned": false, 00:08:38.482 "supported_io_types": { 00:08:38.482 "read": true, 00:08:38.482 "write": true, 00:08:38.482 "unmap": true, 00:08:38.482 "flush": true, 00:08:38.482 "reset": true, 00:08:38.482 "nvme_admin": false, 00:08:38.482 "nvme_io": false, 00:08:38.482 "nvme_io_md": false, 00:08:38.482 "write_zeroes": true, 00:08:38.482 "zcopy": true, 00:08:38.482 "get_zone_info": false, 00:08:38.483 "zone_management": false, 00:08:38.483 "zone_append": false, 00:08:38.483 "compare": false, 00:08:38.483 "compare_and_write": false, 00:08:38.483 "abort": true, 00:08:38.483 "seek_hole": false, 00:08:38.483 "seek_data": false, 00:08:38.483 "copy": true, 00:08:38.483 "nvme_iov_md": false 00:08:38.483 }, 00:08:38.483 "memory_domains": [ 00:08:38.483 { 00:08:38.483 "dma_device_id": "system", 00:08:38.483 "dma_device_type": 1 00:08:38.483 }, 00:08:38.483 { 
00:08:38.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.483 "dma_device_type": 2 00:08:38.483 } 00:08:38.483 ], 00:08:38.483 "driver_specific": {} 00:08:38.483 } 00:08:38.483 ] 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.483 [2024-11-26 17:53:20.106363] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.483 [2024-11-26 17:53:20.106502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.483 [2024-11-26 17:53:20.106558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.483 [2024-11-26 17:53:20.108723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.483 "name": "Existed_Raid", 00:08:38.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.483 "strip_size_kb": 64, 00:08:38.483 "state": "configuring", 00:08:38.483 "raid_level": "raid0", 00:08:38.483 "superblock": false, 00:08:38.483 "num_base_bdevs": 3, 00:08:38.483 "num_base_bdevs_discovered": 2, 00:08:38.483 "num_base_bdevs_operational": 3, 00:08:38.483 "base_bdevs_list": [ 00:08:38.483 { 00:08:38.483 "name": "BaseBdev1", 00:08:38.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.483 
"is_configured": false, 00:08:38.483 "data_offset": 0, 00:08:38.483 "data_size": 0 00:08:38.483 }, 00:08:38.483 { 00:08:38.483 "name": "BaseBdev2", 00:08:38.483 "uuid": "dbdafd43-d294-460d-a91b-c0ebb2b99ab8", 00:08:38.483 "is_configured": true, 00:08:38.483 "data_offset": 0, 00:08:38.483 "data_size": 65536 00:08:38.483 }, 00:08:38.483 { 00:08:38.483 "name": "BaseBdev3", 00:08:38.483 "uuid": "d4fcff32-da2c-417f-a8e2-b11cfce1f9c6", 00:08:38.483 "is_configured": true, 00:08:38.483 "data_offset": 0, 00:08:38.483 "data_size": 65536 00:08:38.483 } 00:08:38.483 ] 00:08:38.483 }' 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.483 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.742 [2024-11-26 17:53:20.529674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.742 17:53:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.742 "name": "Existed_Raid", 00:08:38.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.742 "strip_size_kb": 64, 00:08:38.742 "state": "configuring", 00:08:38.742 "raid_level": "raid0", 00:08:38.742 "superblock": false, 00:08:38.742 "num_base_bdevs": 3, 00:08:38.742 "num_base_bdevs_discovered": 1, 00:08:38.742 "num_base_bdevs_operational": 3, 00:08:38.742 "base_bdevs_list": [ 00:08:38.742 { 00:08:38.742 "name": "BaseBdev1", 00:08:38.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.742 "is_configured": false, 00:08:38.742 "data_offset": 0, 00:08:38.742 "data_size": 0 00:08:38.742 }, 00:08:38.742 { 00:08:38.742 "name": null, 00:08:38.742 "uuid": "dbdafd43-d294-460d-a91b-c0ebb2b99ab8", 00:08:38.742 "is_configured": false, 00:08:38.742 "data_offset": 0, 
00:08:38.742 "data_size": 65536 00:08:38.742 }, 00:08:38.742 { 00:08:38.742 "name": "BaseBdev3", 00:08:38.742 "uuid": "d4fcff32-da2c-417f-a8e2-b11cfce1f9c6", 00:08:38.742 "is_configured": true, 00:08:38.742 "data_offset": 0, 00:08:38.742 "data_size": 65536 00:08:38.742 } 00:08:38.742 ] 00:08:38.742 }' 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.742 17:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.310 [2024-11-26 17:53:21.103972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.310 BaseBdev1 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.310 [ 00:08:39.310 { 00:08:39.310 "name": "BaseBdev1", 00:08:39.310 "aliases": [ 00:08:39.310 "fe12383f-085e-4974-a324-d359a809a353" 00:08:39.310 ], 00:08:39.310 "product_name": "Malloc disk", 00:08:39.310 "block_size": 512, 00:08:39.310 "num_blocks": 65536, 00:08:39.310 "uuid": "fe12383f-085e-4974-a324-d359a809a353", 00:08:39.310 "assigned_rate_limits": { 00:08:39.310 "rw_ios_per_sec": 0, 00:08:39.310 "rw_mbytes_per_sec": 0, 00:08:39.310 "r_mbytes_per_sec": 0, 00:08:39.310 "w_mbytes_per_sec": 0 00:08:39.310 }, 00:08:39.310 "claimed": true, 00:08:39.310 "claim_type": "exclusive_write", 00:08:39.310 "zoned": false, 00:08:39.310 "supported_io_types": { 00:08:39.310 "read": true, 00:08:39.310 "write": true, 00:08:39.310 "unmap": 
true, 00:08:39.310 "flush": true, 00:08:39.310 "reset": true, 00:08:39.310 "nvme_admin": false, 00:08:39.310 "nvme_io": false, 00:08:39.310 "nvme_io_md": false, 00:08:39.310 "write_zeroes": true, 00:08:39.310 "zcopy": true, 00:08:39.310 "get_zone_info": false, 00:08:39.310 "zone_management": false, 00:08:39.310 "zone_append": false, 00:08:39.310 "compare": false, 00:08:39.310 "compare_and_write": false, 00:08:39.310 "abort": true, 00:08:39.310 "seek_hole": false, 00:08:39.310 "seek_data": false, 00:08:39.310 "copy": true, 00:08:39.310 "nvme_iov_md": false 00:08:39.310 }, 00:08:39.310 "memory_domains": [ 00:08:39.310 { 00:08:39.310 "dma_device_id": "system", 00:08:39.310 "dma_device_type": 1 00:08:39.310 }, 00:08:39.310 { 00:08:39.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.310 "dma_device_type": 2 00:08:39.310 } 00:08:39.310 ], 00:08:39.310 "driver_specific": {} 00:08:39.310 } 00:08:39.310 ] 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.310 17:53:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.310 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.311 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.311 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.569 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.569 "name": "Existed_Raid", 00:08:39.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.569 "strip_size_kb": 64, 00:08:39.569 "state": "configuring", 00:08:39.569 "raid_level": "raid0", 00:08:39.569 "superblock": false, 00:08:39.569 "num_base_bdevs": 3, 00:08:39.569 "num_base_bdevs_discovered": 2, 00:08:39.569 "num_base_bdevs_operational": 3, 00:08:39.569 "base_bdevs_list": [ 00:08:39.569 { 00:08:39.569 "name": "BaseBdev1", 00:08:39.569 "uuid": "fe12383f-085e-4974-a324-d359a809a353", 00:08:39.569 "is_configured": true, 00:08:39.569 "data_offset": 0, 00:08:39.569 "data_size": 65536 00:08:39.569 }, 00:08:39.569 { 00:08:39.569 "name": null, 00:08:39.569 "uuid": "dbdafd43-d294-460d-a91b-c0ebb2b99ab8", 00:08:39.569 "is_configured": false, 00:08:39.569 "data_offset": 0, 00:08:39.569 "data_size": 65536 00:08:39.569 }, 00:08:39.569 { 00:08:39.569 "name": "BaseBdev3", 00:08:39.569 "uuid": "d4fcff32-da2c-417f-a8e2-b11cfce1f9c6", 00:08:39.569 "is_configured": true, 00:08:39.569 "data_offset": 0, 
00:08:39.569 "data_size": 65536 00:08:39.569 } 00:08:39.569 ] 00:08:39.569 }' 00:08:39.569 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.569 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.827 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.827 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.827 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.827 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.828 [2024-11-26 17:53:21.655193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.828 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.087 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.087 "name": "Existed_Raid", 00:08:40.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.087 "strip_size_kb": 64, 00:08:40.087 "state": "configuring", 00:08:40.087 "raid_level": "raid0", 00:08:40.087 "superblock": false, 00:08:40.087 "num_base_bdevs": 3, 00:08:40.087 "num_base_bdevs_discovered": 1, 00:08:40.087 "num_base_bdevs_operational": 3, 00:08:40.087 "base_bdevs_list": [ 00:08:40.087 { 00:08:40.087 "name": "BaseBdev1", 00:08:40.087 "uuid": "fe12383f-085e-4974-a324-d359a809a353", 00:08:40.087 "is_configured": true, 00:08:40.087 "data_offset": 0, 00:08:40.087 "data_size": 65536 00:08:40.087 }, 00:08:40.087 { 
00:08:40.087 "name": null, 00:08:40.087 "uuid": "dbdafd43-d294-460d-a91b-c0ebb2b99ab8", 00:08:40.087 "is_configured": false, 00:08:40.087 "data_offset": 0, 00:08:40.087 "data_size": 65536 00:08:40.087 }, 00:08:40.087 { 00:08:40.087 "name": null, 00:08:40.087 "uuid": "d4fcff32-da2c-417f-a8e2-b11cfce1f9c6", 00:08:40.087 "is_configured": false, 00:08:40.087 "data_offset": 0, 00:08:40.087 "data_size": 65536 00:08:40.087 } 00:08:40.087 ] 00:08:40.087 }' 00:08:40.087 17:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.087 17:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.346 [2024-11-26 17:53:22.134427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.346 "name": "Existed_Raid", 00:08:40.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.346 "strip_size_kb": 64, 00:08:40.346 "state": "configuring", 00:08:40.346 "raid_level": "raid0", 00:08:40.346 
"superblock": false, 00:08:40.346 "num_base_bdevs": 3, 00:08:40.346 "num_base_bdevs_discovered": 2, 00:08:40.346 "num_base_bdevs_operational": 3, 00:08:40.346 "base_bdevs_list": [ 00:08:40.346 { 00:08:40.346 "name": "BaseBdev1", 00:08:40.346 "uuid": "fe12383f-085e-4974-a324-d359a809a353", 00:08:40.346 "is_configured": true, 00:08:40.346 "data_offset": 0, 00:08:40.346 "data_size": 65536 00:08:40.346 }, 00:08:40.346 { 00:08:40.346 "name": null, 00:08:40.346 "uuid": "dbdafd43-d294-460d-a91b-c0ebb2b99ab8", 00:08:40.346 "is_configured": false, 00:08:40.346 "data_offset": 0, 00:08:40.346 "data_size": 65536 00:08:40.346 }, 00:08:40.346 { 00:08:40.346 "name": "BaseBdev3", 00:08:40.346 "uuid": "d4fcff32-da2c-417f-a8e2-b11cfce1f9c6", 00:08:40.346 "is_configured": true, 00:08:40.346 "data_offset": 0, 00:08:40.346 "data_size": 65536 00:08:40.346 } 00:08:40.346 ] 00:08:40.346 }' 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.346 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.913 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.914 [2024-11-26 17:53:22.653582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.914 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.173 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.173 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.173 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.173 17:53:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.173 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.173 "name": "Existed_Raid", 00:08:41.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.173 "strip_size_kb": 64, 00:08:41.173 "state": "configuring", 00:08:41.173 "raid_level": "raid0", 00:08:41.173 "superblock": false, 00:08:41.173 "num_base_bdevs": 3, 00:08:41.173 "num_base_bdevs_discovered": 1, 00:08:41.173 "num_base_bdevs_operational": 3, 00:08:41.173 "base_bdevs_list": [ 00:08:41.173 { 00:08:41.173 "name": null, 00:08:41.173 "uuid": "fe12383f-085e-4974-a324-d359a809a353", 00:08:41.173 "is_configured": false, 00:08:41.173 "data_offset": 0, 00:08:41.173 "data_size": 65536 00:08:41.173 }, 00:08:41.173 { 00:08:41.173 "name": null, 00:08:41.173 "uuid": "dbdafd43-d294-460d-a91b-c0ebb2b99ab8", 00:08:41.173 "is_configured": false, 00:08:41.173 "data_offset": 0, 00:08:41.173 "data_size": 65536 00:08:41.173 }, 00:08:41.173 { 00:08:41.173 "name": "BaseBdev3", 00:08:41.173 "uuid": "d4fcff32-da2c-417f-a8e2-b11cfce1f9c6", 00:08:41.173 "is_configured": true, 00:08:41.173 "data_offset": 0, 00:08:41.173 "data_size": 65536 00:08:41.173 } 00:08:41.173 ] 00:08:41.173 }' 00:08:41.173 17:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.173 17:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.441 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.441 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:41.441 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.441 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.718 [2024-11-26 17:53:23.337074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.718 "name": "Existed_Raid", 00:08:41.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.718 "strip_size_kb": 64, 00:08:41.718 "state": "configuring", 00:08:41.718 "raid_level": "raid0", 00:08:41.718 "superblock": false, 00:08:41.718 "num_base_bdevs": 3, 00:08:41.718 "num_base_bdevs_discovered": 2, 00:08:41.718 "num_base_bdevs_operational": 3, 00:08:41.718 "base_bdevs_list": [ 00:08:41.718 { 00:08:41.718 "name": null, 00:08:41.718 "uuid": "fe12383f-085e-4974-a324-d359a809a353", 00:08:41.718 "is_configured": false, 00:08:41.718 "data_offset": 0, 00:08:41.718 "data_size": 65536 00:08:41.718 }, 00:08:41.718 { 00:08:41.718 "name": "BaseBdev2", 00:08:41.718 "uuid": "dbdafd43-d294-460d-a91b-c0ebb2b99ab8", 00:08:41.718 "is_configured": true, 00:08:41.718 "data_offset": 0, 00:08:41.718 "data_size": 65536 00:08:41.718 }, 00:08:41.718 { 00:08:41.718 "name": "BaseBdev3", 00:08:41.718 "uuid": "d4fcff32-da2c-417f-a8e2-b11cfce1f9c6", 00:08:41.718 "is_configured": true, 00:08:41.718 "data_offset": 0, 00:08:41.718 "data_size": 65536 00:08:41.718 } 00:08:41.718 ] 00:08:41.718 }' 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.718 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.976 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:41.976 
17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.976 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.976 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.976 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.976 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fe12383f-085e-4974-a324-d359a809a353 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.237 [2024-11-26 17:53:23.931689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:42.237 [2024-11-26 17:53:23.931862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:42.237 [2024-11-26 17:53:23.931880] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:42.237 [2024-11-26 17:53:23.932224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:42.237 [2024-11-26 17:53:23.932429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:42.237 [2024-11-26 17:53:23.932440] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:42.237 [2024-11-26 17:53:23.932761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.237 NewBaseBdev 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:42.237 [ 00:08:42.237 { 00:08:42.237 "name": "NewBaseBdev", 00:08:42.237 "aliases": [ 00:08:42.237 "fe12383f-085e-4974-a324-d359a809a353" 00:08:42.237 ], 00:08:42.237 "product_name": "Malloc disk", 00:08:42.237 "block_size": 512, 00:08:42.237 "num_blocks": 65536, 00:08:42.237 "uuid": "fe12383f-085e-4974-a324-d359a809a353", 00:08:42.237 "assigned_rate_limits": { 00:08:42.237 "rw_ios_per_sec": 0, 00:08:42.237 "rw_mbytes_per_sec": 0, 00:08:42.237 "r_mbytes_per_sec": 0, 00:08:42.237 "w_mbytes_per_sec": 0 00:08:42.237 }, 00:08:42.237 "claimed": true, 00:08:42.237 "claim_type": "exclusive_write", 00:08:42.237 "zoned": false, 00:08:42.237 "supported_io_types": { 00:08:42.237 "read": true, 00:08:42.237 "write": true, 00:08:42.237 "unmap": true, 00:08:42.237 "flush": true, 00:08:42.237 "reset": true, 00:08:42.237 "nvme_admin": false, 00:08:42.237 "nvme_io": false, 00:08:42.237 "nvme_io_md": false, 00:08:42.237 "write_zeroes": true, 00:08:42.237 "zcopy": true, 00:08:42.237 "get_zone_info": false, 00:08:42.237 "zone_management": false, 00:08:42.237 "zone_append": false, 00:08:42.237 "compare": false, 00:08:42.237 "compare_and_write": false, 00:08:42.237 "abort": true, 00:08:42.237 "seek_hole": false, 00:08:42.237 "seek_data": false, 00:08:42.237 "copy": true, 00:08:42.237 "nvme_iov_md": false 00:08:42.237 }, 00:08:42.237 "memory_domains": [ 00:08:42.237 { 00:08:42.237 "dma_device_id": "system", 00:08:42.237 "dma_device_type": 1 00:08:42.237 }, 00:08:42.237 { 00:08:42.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.237 "dma_device_type": 2 00:08:42.237 } 00:08:42.237 ], 00:08:42.237 "driver_specific": {} 00:08:42.237 } 00:08:42.237 ] 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.237 17:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.237 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.237 "name": "Existed_Raid", 00:08:42.237 "uuid": "4fb6124a-2946-42b8-ad2a-8b8f51314f33", 00:08:42.237 "strip_size_kb": 64, 00:08:42.237 "state": "online", 00:08:42.237 "raid_level": "raid0", 00:08:42.237 "superblock": false, 00:08:42.237 "num_base_bdevs": 3, 00:08:42.237 
"num_base_bdevs_discovered": 3, 00:08:42.237 "num_base_bdevs_operational": 3, 00:08:42.237 "base_bdevs_list": [ 00:08:42.237 { 00:08:42.237 "name": "NewBaseBdev", 00:08:42.237 "uuid": "fe12383f-085e-4974-a324-d359a809a353", 00:08:42.237 "is_configured": true, 00:08:42.237 "data_offset": 0, 00:08:42.237 "data_size": 65536 00:08:42.237 }, 00:08:42.237 { 00:08:42.237 "name": "BaseBdev2", 00:08:42.237 "uuid": "dbdafd43-d294-460d-a91b-c0ebb2b99ab8", 00:08:42.237 "is_configured": true, 00:08:42.237 "data_offset": 0, 00:08:42.237 "data_size": 65536 00:08:42.237 }, 00:08:42.237 { 00:08:42.237 "name": "BaseBdev3", 00:08:42.237 "uuid": "d4fcff32-da2c-417f-a8e2-b11cfce1f9c6", 00:08:42.237 "is_configured": true, 00:08:42.237 "data_offset": 0, 00:08:42.237 "data_size": 65536 00:08:42.237 } 00:08:42.237 ] 00:08:42.237 }' 00:08:42.237 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.237 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.804 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:42.804 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:42.804 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:42.804 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:42.804 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.804 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.804 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:42.804 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:42.804 17:53:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.804 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.804 [2024-11-26 17:53:24.459304] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.804 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.804 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.804 "name": "Existed_Raid", 00:08:42.804 "aliases": [ 00:08:42.804 "4fb6124a-2946-42b8-ad2a-8b8f51314f33" 00:08:42.804 ], 00:08:42.804 "product_name": "Raid Volume", 00:08:42.804 "block_size": 512, 00:08:42.804 "num_blocks": 196608, 00:08:42.804 "uuid": "4fb6124a-2946-42b8-ad2a-8b8f51314f33", 00:08:42.804 "assigned_rate_limits": { 00:08:42.804 "rw_ios_per_sec": 0, 00:08:42.804 "rw_mbytes_per_sec": 0, 00:08:42.804 "r_mbytes_per_sec": 0, 00:08:42.804 "w_mbytes_per_sec": 0 00:08:42.804 }, 00:08:42.804 "claimed": false, 00:08:42.804 "zoned": false, 00:08:42.804 "supported_io_types": { 00:08:42.804 "read": true, 00:08:42.804 "write": true, 00:08:42.804 "unmap": true, 00:08:42.804 "flush": true, 00:08:42.804 "reset": true, 00:08:42.804 "nvme_admin": false, 00:08:42.804 "nvme_io": false, 00:08:42.804 "nvme_io_md": false, 00:08:42.804 "write_zeroes": true, 00:08:42.804 "zcopy": false, 00:08:42.804 "get_zone_info": false, 00:08:42.804 "zone_management": false, 00:08:42.804 "zone_append": false, 00:08:42.804 "compare": false, 00:08:42.805 "compare_and_write": false, 00:08:42.805 "abort": false, 00:08:42.805 "seek_hole": false, 00:08:42.805 "seek_data": false, 00:08:42.805 "copy": false, 00:08:42.805 "nvme_iov_md": false 00:08:42.805 }, 00:08:42.805 "memory_domains": [ 00:08:42.805 { 00:08:42.805 "dma_device_id": "system", 00:08:42.805 "dma_device_type": 1 00:08:42.805 }, 00:08:42.805 { 00:08:42.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.805 "dma_device_type": 2 00:08:42.805 }, 
00:08:42.805 { 00:08:42.805 "dma_device_id": "system", 00:08:42.805 "dma_device_type": 1 00:08:42.805 }, 00:08:42.805 { 00:08:42.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.805 "dma_device_type": 2 00:08:42.805 }, 00:08:42.805 { 00:08:42.805 "dma_device_id": "system", 00:08:42.805 "dma_device_type": 1 00:08:42.805 }, 00:08:42.805 { 00:08:42.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.805 "dma_device_type": 2 00:08:42.805 } 00:08:42.805 ], 00:08:42.805 "driver_specific": { 00:08:42.805 "raid": { 00:08:42.805 "uuid": "4fb6124a-2946-42b8-ad2a-8b8f51314f33", 00:08:42.805 "strip_size_kb": 64, 00:08:42.805 "state": "online", 00:08:42.805 "raid_level": "raid0", 00:08:42.805 "superblock": false, 00:08:42.805 "num_base_bdevs": 3, 00:08:42.805 "num_base_bdevs_discovered": 3, 00:08:42.805 "num_base_bdevs_operational": 3, 00:08:42.805 "base_bdevs_list": [ 00:08:42.805 { 00:08:42.805 "name": "NewBaseBdev", 00:08:42.805 "uuid": "fe12383f-085e-4974-a324-d359a809a353", 00:08:42.805 "is_configured": true, 00:08:42.805 "data_offset": 0, 00:08:42.805 "data_size": 65536 00:08:42.805 }, 00:08:42.805 { 00:08:42.805 "name": "BaseBdev2", 00:08:42.805 "uuid": "dbdafd43-d294-460d-a91b-c0ebb2b99ab8", 00:08:42.805 "is_configured": true, 00:08:42.805 "data_offset": 0, 00:08:42.805 "data_size": 65536 00:08:42.805 }, 00:08:42.805 { 00:08:42.805 "name": "BaseBdev3", 00:08:42.805 "uuid": "d4fcff32-da2c-417f-a8e2-b11cfce1f9c6", 00:08:42.805 "is_configured": true, 00:08:42.805 "data_offset": 0, 00:08:42.805 "data_size": 65536 00:08:42.805 } 00:08:42.805 ] 00:08:42.805 } 00:08:42.805 } 00:08:42.805 }' 00:08:42.805 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.805 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:42.805 BaseBdev2 00:08:42.805 BaseBdev3' 00:08:42.805 17:53:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.805 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:42.805 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.805 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:42.805 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.805 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.805 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.805 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.805 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.805 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.805 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.805 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:42.805 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.805 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.805 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.065 [2024-11-26 17:53:24.746474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:43.065 [2024-11-26 17:53:24.746600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:43.065 [2024-11-26 17:53:24.746732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.065 [2024-11-26 17:53:24.746831] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.065 [2024-11-26 17:53:24.746878] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64002 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64002 ']' 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64002 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64002 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:43.065 killing process with pid 64002 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64002' 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64002 00:08:43.065 [2024-11-26 17:53:24.799364] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:43.065 17:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64002 00:08:43.323 [2024-11-26 17:53:25.158736] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:44.719 ************************************ 00:08:44.719 END TEST raid_state_function_test 00:08:44.719 ************************************ 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:44.719 00:08:44.719 real 0m11.373s 
00:08:44.719 user 0m18.040s 00:08:44.719 sys 0m1.798s 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.719 17:53:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:44.719 17:53:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:44.719 17:53:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.719 17:53:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:44.719 ************************************ 00:08:44.719 START TEST raid_state_function_test_sb 00:08:44.719 ************************************ 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64640 00:08:44.719 17:53:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64640' 00:08:44.719 Process raid pid: 64640 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64640 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64640 ']' 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.719 17:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.978 [2024-11-26 17:53:26.620416] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:08:44.978 [2024-11-26 17:53:26.620650] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.978 [2024-11-26 17:53:26.799155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.237 [2024-11-26 17:53:26.940690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.496 [2024-11-26 17:53:27.183601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.496 [2024-11-26 17:53:27.183755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.064 [2024-11-26 17:53:27.629695] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:46.064 [2024-11-26 17:53:27.629842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:46.064 [2024-11-26 17:53:27.629884] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.064 [2024-11-26 17:53:27.629921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.064 [2024-11-26 17:53:27.629961] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:46.064 [2024-11-26 17:53:27.629997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.064 "name": "Existed_Raid", 00:08:46.064 "uuid": "4dd22c94-2438-42a0-b0ff-02a92a399576", 00:08:46.064 "strip_size_kb": 64, 00:08:46.064 "state": "configuring", 00:08:46.064 "raid_level": "raid0", 00:08:46.064 "superblock": true, 00:08:46.064 "num_base_bdevs": 3, 00:08:46.064 "num_base_bdevs_discovered": 0, 00:08:46.064 "num_base_bdevs_operational": 3, 00:08:46.064 "base_bdevs_list": [ 00:08:46.064 { 00:08:46.064 "name": "BaseBdev1", 00:08:46.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.064 "is_configured": false, 00:08:46.064 "data_offset": 0, 00:08:46.064 "data_size": 0 00:08:46.064 }, 00:08:46.064 { 00:08:46.064 "name": "BaseBdev2", 00:08:46.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.064 "is_configured": false, 00:08:46.064 "data_offset": 0, 00:08:46.064 "data_size": 0 00:08:46.064 }, 00:08:46.064 { 00:08:46.064 "name": "BaseBdev3", 00:08:46.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.064 "is_configured": false, 00:08:46.064 "data_offset": 0, 00:08:46.064 "data_size": 0 00:08:46.064 } 00:08:46.064 ] 00:08:46.064 }' 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.064 17:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.323 [2024-11-26 17:53:28.108871] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:46.323 [2024-11-26 17:53:28.109031] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.323 [2024-11-26 17:53:28.116907] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:46.323 [2024-11-26 17:53:28.117035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:46.323 [2024-11-26 17:53:28.117172] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.323 [2024-11-26 17:53:28.117202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.323 [2024-11-26 17:53:28.117239] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:46.323 [2024-11-26 17:53:28.117267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.323 [2024-11-26 17:53:28.166526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.323 BaseBdev1 
00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:46.323 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.581 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.582 [ 00:08:46.582 { 00:08:46.582 "name": "BaseBdev1", 00:08:46.582 "aliases": [ 00:08:46.582 "411b6d99-9a40-4978-bba9-a906cf118cdf" 00:08:46.582 ], 00:08:46.582 "product_name": "Malloc disk", 00:08:46.582 "block_size": 512, 00:08:46.582 "num_blocks": 65536, 00:08:46.582 "uuid": "411b6d99-9a40-4978-bba9-a906cf118cdf", 00:08:46.582 "assigned_rate_limits": { 00:08:46.582 
"rw_ios_per_sec": 0, 00:08:46.582 "rw_mbytes_per_sec": 0, 00:08:46.582 "r_mbytes_per_sec": 0, 00:08:46.582 "w_mbytes_per_sec": 0 00:08:46.582 }, 00:08:46.582 "claimed": true, 00:08:46.582 "claim_type": "exclusive_write", 00:08:46.582 "zoned": false, 00:08:46.582 "supported_io_types": { 00:08:46.582 "read": true, 00:08:46.582 "write": true, 00:08:46.582 "unmap": true, 00:08:46.582 "flush": true, 00:08:46.582 "reset": true, 00:08:46.582 "nvme_admin": false, 00:08:46.582 "nvme_io": false, 00:08:46.582 "nvme_io_md": false, 00:08:46.582 "write_zeroes": true, 00:08:46.582 "zcopy": true, 00:08:46.582 "get_zone_info": false, 00:08:46.582 "zone_management": false, 00:08:46.582 "zone_append": false, 00:08:46.582 "compare": false, 00:08:46.582 "compare_and_write": false, 00:08:46.582 "abort": true, 00:08:46.582 "seek_hole": false, 00:08:46.582 "seek_data": false, 00:08:46.582 "copy": true, 00:08:46.582 "nvme_iov_md": false 00:08:46.582 }, 00:08:46.582 "memory_domains": [ 00:08:46.582 { 00:08:46.582 "dma_device_id": "system", 00:08:46.582 "dma_device_type": 1 00:08:46.582 }, 00:08:46.582 { 00:08:46.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.582 "dma_device_type": 2 00:08:46.582 } 00:08:46.582 ], 00:08:46.582 "driver_specific": {} 00:08:46.582 } 00:08:46.582 ] 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.582 "name": "Existed_Raid", 00:08:46.582 "uuid": "86b2a3b8-7e77-4847-a8c4-e9826a688192", 00:08:46.582 "strip_size_kb": 64, 00:08:46.582 "state": "configuring", 00:08:46.582 "raid_level": "raid0", 00:08:46.582 "superblock": true, 00:08:46.582 "num_base_bdevs": 3, 00:08:46.582 "num_base_bdevs_discovered": 1, 00:08:46.582 "num_base_bdevs_operational": 3, 00:08:46.582 "base_bdevs_list": [ 00:08:46.582 { 00:08:46.582 "name": "BaseBdev1", 00:08:46.582 "uuid": "411b6d99-9a40-4978-bba9-a906cf118cdf", 00:08:46.582 "is_configured": true, 00:08:46.582 "data_offset": 2048, 00:08:46.582 "data_size": 63488 
00:08:46.582 }, 00:08:46.582 { 00:08:46.582 "name": "BaseBdev2", 00:08:46.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.582 "is_configured": false, 00:08:46.582 "data_offset": 0, 00:08:46.582 "data_size": 0 00:08:46.582 }, 00:08:46.582 { 00:08:46.582 "name": "BaseBdev3", 00:08:46.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.582 "is_configured": false, 00:08:46.582 "data_offset": 0, 00:08:46.582 "data_size": 0 00:08:46.582 } 00:08:46.582 ] 00:08:46.582 }' 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.582 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.840 [2024-11-26 17:53:28.657794] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:46.840 [2024-11-26 17:53:28.657927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.840 [2024-11-26 17:53:28.665896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.840 [2024-11-26 
17:53:28.668216] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.840 [2024-11-26 17:53:28.668332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.840 [2024-11-26 17:53:28.668368] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:46.840 [2024-11-26 17:53:28.668395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.840 17:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.099 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.099 "name": "Existed_Raid", 00:08:47.099 "uuid": "52deb291-d2fd-457b-a677-2c76eb5c68e9", 00:08:47.099 "strip_size_kb": 64, 00:08:47.099 "state": "configuring", 00:08:47.099 "raid_level": "raid0", 00:08:47.099 "superblock": true, 00:08:47.099 "num_base_bdevs": 3, 00:08:47.099 "num_base_bdevs_discovered": 1, 00:08:47.099 "num_base_bdevs_operational": 3, 00:08:47.099 "base_bdevs_list": [ 00:08:47.099 { 00:08:47.099 "name": "BaseBdev1", 00:08:47.099 "uuid": "411b6d99-9a40-4978-bba9-a906cf118cdf", 00:08:47.099 "is_configured": true, 00:08:47.099 "data_offset": 2048, 00:08:47.099 "data_size": 63488 00:08:47.099 }, 00:08:47.099 { 00:08:47.099 "name": "BaseBdev2", 00:08:47.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.099 "is_configured": false, 00:08:47.099 "data_offset": 0, 00:08:47.099 "data_size": 0 00:08:47.099 }, 00:08:47.099 { 00:08:47.099 "name": "BaseBdev3", 00:08:47.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.099 "is_configured": false, 00:08:47.099 "data_offset": 0, 00:08:47.099 "data_size": 0 00:08:47.099 } 00:08:47.099 ] 00:08:47.099 }' 00:08:47.099 17:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.099 17:53:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:47.358 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:47.358 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.358 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.358 [2024-11-26 17:53:29.173324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:47.358 BaseBdev2 00:08:47.358 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.358 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:47.358 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:47.358 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.358 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:47.358 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.358 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.358 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.358 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.358 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.358 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.358 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:47.358 17:53:29 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.359 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.359 [ 00:08:47.359 { 00:08:47.359 "name": "BaseBdev2", 00:08:47.359 "aliases": [ 00:08:47.359 "19646280-7121-41ac-915d-c1c37628dde9" 00:08:47.359 ], 00:08:47.359 "product_name": "Malloc disk", 00:08:47.359 "block_size": 512, 00:08:47.359 "num_blocks": 65536, 00:08:47.359 "uuid": "19646280-7121-41ac-915d-c1c37628dde9", 00:08:47.359 "assigned_rate_limits": { 00:08:47.359 "rw_ios_per_sec": 0, 00:08:47.359 "rw_mbytes_per_sec": 0, 00:08:47.359 "r_mbytes_per_sec": 0, 00:08:47.359 "w_mbytes_per_sec": 0 00:08:47.359 }, 00:08:47.359 "claimed": true, 00:08:47.359 "claim_type": "exclusive_write", 00:08:47.359 "zoned": false, 00:08:47.359 "supported_io_types": { 00:08:47.359 "read": true, 00:08:47.359 "write": true, 00:08:47.359 "unmap": true, 00:08:47.359 "flush": true, 00:08:47.359 "reset": true, 00:08:47.359 "nvme_admin": false, 00:08:47.359 "nvme_io": false, 00:08:47.359 "nvme_io_md": false, 00:08:47.359 "write_zeroes": true, 00:08:47.359 "zcopy": true, 00:08:47.359 "get_zone_info": false, 00:08:47.359 "zone_management": false, 00:08:47.359 "zone_append": false, 00:08:47.359 "compare": false, 00:08:47.359 "compare_and_write": false, 00:08:47.359 "abort": true, 00:08:47.359 "seek_hole": false, 00:08:47.359 "seek_data": false, 00:08:47.359 "copy": true, 00:08:47.359 "nvme_iov_md": false 00:08:47.359 }, 00:08:47.359 "memory_domains": [ 00:08:47.359 { 00:08:47.359 "dma_device_id": "system", 00:08:47.359 "dma_device_type": 1 00:08:47.359 }, 00:08:47.359 { 00:08:47.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.359 "dma_device_type": 2 00:08:47.359 } 00:08:47.359 ], 00:08:47.359 "driver_specific": {} 00:08:47.359 } 00:08:47.359 ] 00:08:47.359 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.359 17:53:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:47.359 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:47.359 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.359 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.359 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.359 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.359 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.359 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.359 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.359 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.359 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.359 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.359 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.618 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.618 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.618 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.618 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.618 17:53:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.618 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.618 "name": "Existed_Raid", 00:08:47.618 "uuid": "52deb291-d2fd-457b-a677-2c76eb5c68e9", 00:08:47.618 "strip_size_kb": 64, 00:08:47.619 "state": "configuring", 00:08:47.619 "raid_level": "raid0", 00:08:47.619 "superblock": true, 00:08:47.619 "num_base_bdevs": 3, 00:08:47.619 "num_base_bdevs_discovered": 2, 00:08:47.619 "num_base_bdevs_operational": 3, 00:08:47.619 "base_bdevs_list": [ 00:08:47.619 { 00:08:47.619 "name": "BaseBdev1", 00:08:47.619 "uuid": "411b6d99-9a40-4978-bba9-a906cf118cdf", 00:08:47.619 "is_configured": true, 00:08:47.619 "data_offset": 2048, 00:08:47.619 "data_size": 63488 00:08:47.619 }, 00:08:47.619 { 00:08:47.619 "name": "BaseBdev2", 00:08:47.619 "uuid": "19646280-7121-41ac-915d-c1c37628dde9", 00:08:47.619 "is_configured": true, 00:08:47.619 "data_offset": 2048, 00:08:47.619 "data_size": 63488 00:08:47.619 }, 00:08:47.619 { 00:08:47.619 "name": "BaseBdev3", 00:08:47.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.619 "is_configured": false, 00:08:47.619 "data_offset": 0, 00:08:47.619 "data_size": 0 00:08:47.619 } 00:08:47.619 ] 00:08:47.619 }' 00:08:47.619 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.619 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.877 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:47.877 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.877 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.135 [2024-11-26 17:53:29.766186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.135 BaseBdev3 00:08:48.135 [2024-11-26 
17:53:29.766583] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:48.135 [2024-11-26 17:53:29.766616] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:48.135 [2024-11-26 17:53:29.766929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:48.135 [2024-11-26 17:53:29.767142] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:48.135 [2024-11-26 17:53:29.767157] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:48.135 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.135 [2024-11-26 17:53:29.767340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.135 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:48.135 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:48.135 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.135 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:48.135 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.135 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.135 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.135 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.135 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.135 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:48.135 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:48.135 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.135 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.135 [ 00:08:48.135 { 00:08:48.135 "name": "BaseBdev3", 00:08:48.135 "aliases": [ 00:08:48.135 "83376be6-e408-499e-b60d-06aa1d5c5fd1" 00:08:48.135 ], 00:08:48.135 "product_name": "Malloc disk", 00:08:48.135 "block_size": 512, 00:08:48.135 "num_blocks": 65536, 00:08:48.135 "uuid": "83376be6-e408-499e-b60d-06aa1d5c5fd1", 00:08:48.135 "assigned_rate_limits": { 00:08:48.135 "rw_ios_per_sec": 0, 00:08:48.135 "rw_mbytes_per_sec": 0, 00:08:48.135 "r_mbytes_per_sec": 0, 00:08:48.135 "w_mbytes_per_sec": 0 00:08:48.135 }, 00:08:48.135 "claimed": true, 00:08:48.135 "claim_type": "exclusive_write", 00:08:48.135 "zoned": false, 00:08:48.135 "supported_io_types": { 00:08:48.135 "read": true, 00:08:48.135 "write": true, 00:08:48.135 "unmap": true, 00:08:48.135 "flush": true, 00:08:48.135 "reset": true, 00:08:48.135 "nvme_admin": false, 00:08:48.135 "nvme_io": false, 00:08:48.135 "nvme_io_md": false, 00:08:48.135 "write_zeroes": true, 00:08:48.135 "zcopy": true, 00:08:48.135 "get_zone_info": false, 00:08:48.135 "zone_management": false, 00:08:48.135 "zone_append": false, 00:08:48.135 "compare": false, 00:08:48.135 "compare_and_write": false, 00:08:48.135 "abort": true, 00:08:48.135 "seek_hole": false, 00:08:48.135 "seek_data": false, 00:08:48.135 "copy": true, 00:08:48.135 "nvme_iov_md": false 00:08:48.135 }, 00:08:48.135 "memory_domains": [ 00:08:48.135 { 00:08:48.135 "dma_device_id": "system", 00:08:48.135 "dma_device_type": 1 00:08:48.135 }, 00:08:48.135 { 00:08:48.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.135 "dma_device_type": 2 00:08:48.135 } 00:08:48.135 ], 00:08:48.135 "driver_specific": {} 
00:08:48.135 } 00:08:48.135 ] 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.136 
17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.136 "name": "Existed_Raid", 00:08:48.136 "uuid": "52deb291-d2fd-457b-a677-2c76eb5c68e9", 00:08:48.136 "strip_size_kb": 64, 00:08:48.136 "state": "online", 00:08:48.136 "raid_level": "raid0", 00:08:48.136 "superblock": true, 00:08:48.136 "num_base_bdevs": 3, 00:08:48.136 "num_base_bdevs_discovered": 3, 00:08:48.136 "num_base_bdevs_operational": 3, 00:08:48.136 "base_bdevs_list": [ 00:08:48.136 { 00:08:48.136 "name": "BaseBdev1", 00:08:48.136 "uuid": "411b6d99-9a40-4978-bba9-a906cf118cdf", 00:08:48.136 "is_configured": true, 00:08:48.136 "data_offset": 2048, 00:08:48.136 "data_size": 63488 00:08:48.136 }, 00:08:48.136 { 00:08:48.136 "name": "BaseBdev2", 00:08:48.136 "uuid": "19646280-7121-41ac-915d-c1c37628dde9", 00:08:48.136 "is_configured": true, 00:08:48.136 "data_offset": 2048, 00:08:48.136 "data_size": 63488 00:08:48.136 }, 00:08:48.136 { 00:08:48.136 "name": "BaseBdev3", 00:08:48.136 "uuid": "83376be6-e408-499e-b60d-06aa1d5c5fd1", 00:08:48.136 "is_configured": true, 00:08:48.136 "data_offset": 2048, 00:08:48.136 "data_size": 63488 00:08:48.136 } 00:08:48.136 ] 00:08:48.136 }' 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.136 17:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.396 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:48.396 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:48.396 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:08:48.396 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:48.396 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:48.396 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:48.396 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:48.396 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:48.396 17:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.396 17:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.396 [2024-11-26 17:53:30.245901] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.656 17:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.656 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:48.656 "name": "Existed_Raid", 00:08:48.656 "aliases": [ 00:08:48.656 "52deb291-d2fd-457b-a677-2c76eb5c68e9" 00:08:48.656 ], 00:08:48.656 "product_name": "Raid Volume", 00:08:48.656 "block_size": 512, 00:08:48.656 "num_blocks": 190464, 00:08:48.656 "uuid": "52deb291-d2fd-457b-a677-2c76eb5c68e9", 00:08:48.656 "assigned_rate_limits": { 00:08:48.656 "rw_ios_per_sec": 0, 00:08:48.656 "rw_mbytes_per_sec": 0, 00:08:48.656 "r_mbytes_per_sec": 0, 00:08:48.656 "w_mbytes_per_sec": 0 00:08:48.656 }, 00:08:48.656 "claimed": false, 00:08:48.656 "zoned": false, 00:08:48.656 "supported_io_types": { 00:08:48.656 "read": true, 00:08:48.656 "write": true, 00:08:48.656 "unmap": true, 00:08:48.656 "flush": true, 00:08:48.656 "reset": true, 00:08:48.656 "nvme_admin": false, 00:08:48.656 "nvme_io": false, 00:08:48.656 "nvme_io_md": false, 00:08:48.656 
"write_zeroes": true, 00:08:48.656 "zcopy": false, 00:08:48.656 "get_zone_info": false, 00:08:48.656 "zone_management": false, 00:08:48.656 "zone_append": false, 00:08:48.656 "compare": false, 00:08:48.656 "compare_and_write": false, 00:08:48.656 "abort": false, 00:08:48.656 "seek_hole": false, 00:08:48.656 "seek_data": false, 00:08:48.656 "copy": false, 00:08:48.656 "nvme_iov_md": false 00:08:48.656 }, 00:08:48.656 "memory_domains": [ 00:08:48.656 { 00:08:48.656 "dma_device_id": "system", 00:08:48.656 "dma_device_type": 1 00:08:48.656 }, 00:08:48.656 { 00:08:48.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.656 "dma_device_type": 2 00:08:48.656 }, 00:08:48.656 { 00:08:48.656 "dma_device_id": "system", 00:08:48.656 "dma_device_type": 1 00:08:48.656 }, 00:08:48.656 { 00:08:48.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.656 "dma_device_type": 2 00:08:48.656 }, 00:08:48.656 { 00:08:48.656 "dma_device_id": "system", 00:08:48.656 "dma_device_type": 1 00:08:48.656 }, 00:08:48.656 { 00:08:48.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.656 "dma_device_type": 2 00:08:48.656 } 00:08:48.656 ], 00:08:48.656 "driver_specific": { 00:08:48.656 "raid": { 00:08:48.656 "uuid": "52deb291-d2fd-457b-a677-2c76eb5c68e9", 00:08:48.656 "strip_size_kb": 64, 00:08:48.656 "state": "online", 00:08:48.656 "raid_level": "raid0", 00:08:48.656 "superblock": true, 00:08:48.656 "num_base_bdevs": 3, 00:08:48.656 "num_base_bdevs_discovered": 3, 00:08:48.656 "num_base_bdevs_operational": 3, 00:08:48.656 "base_bdevs_list": [ 00:08:48.656 { 00:08:48.656 "name": "BaseBdev1", 00:08:48.656 "uuid": "411b6d99-9a40-4978-bba9-a906cf118cdf", 00:08:48.656 "is_configured": true, 00:08:48.656 "data_offset": 2048, 00:08:48.656 "data_size": 63488 00:08:48.656 }, 00:08:48.656 { 00:08:48.656 "name": "BaseBdev2", 00:08:48.656 "uuid": "19646280-7121-41ac-915d-c1c37628dde9", 00:08:48.656 "is_configured": true, 00:08:48.656 "data_offset": 2048, 00:08:48.656 "data_size": 63488 00:08:48.656 }, 
00:08:48.656 { 00:08:48.656 "name": "BaseBdev3", 00:08:48.656 "uuid": "83376be6-e408-499e-b60d-06aa1d5c5fd1", 00:08:48.656 "is_configured": true, 00:08:48.656 "data_offset": 2048, 00:08:48.656 "data_size": 63488 00:08:48.656 } 00:08:48.656 ] 00:08:48.656 } 00:08:48.656 } 00:08:48.656 }' 00:08:48.656 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:48.656 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:48.656 BaseBdev2 00:08:48.656 BaseBdev3' 00:08:48.656 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.656 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:48.656 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.656 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:48.656 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.657 17:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.657 17:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.657 17:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.657 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.657 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.657 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.657 
17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.657 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:48.657 17:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.657 17:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.657 17:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.657 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.657 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.657 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.657 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:48.657 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.657 17:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.657 17:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.657 17:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.916 [2024-11-26 17:53:30.529273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:48.916 [2024-11-26 17:53:30.529380] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:48.916 [2024-11-26 17:53:30.529480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.916 "name": "Existed_Raid", 00:08:48.916 "uuid": "52deb291-d2fd-457b-a677-2c76eb5c68e9", 00:08:48.916 "strip_size_kb": 64, 00:08:48.916 "state": "offline", 00:08:48.916 "raid_level": "raid0", 00:08:48.916 "superblock": true, 00:08:48.916 "num_base_bdevs": 3, 00:08:48.916 "num_base_bdevs_discovered": 2, 00:08:48.916 "num_base_bdevs_operational": 2, 00:08:48.916 "base_bdevs_list": [ 00:08:48.916 { 00:08:48.916 "name": null, 00:08:48.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.916 "is_configured": false, 00:08:48.916 "data_offset": 0, 00:08:48.916 "data_size": 63488 00:08:48.916 }, 00:08:48.916 { 00:08:48.916 "name": "BaseBdev2", 00:08:48.916 "uuid": "19646280-7121-41ac-915d-c1c37628dde9", 00:08:48.916 "is_configured": true, 00:08:48.916 "data_offset": 2048, 00:08:48.916 "data_size": 63488 00:08:48.916 }, 00:08:48.916 { 00:08:48.916 "name": "BaseBdev3", 00:08:48.916 "uuid": "83376be6-e408-499e-b60d-06aa1d5c5fd1", 
00:08:48.916 "is_configured": true, 00:08:48.916 "data_offset": 2048, 00:08:48.916 "data_size": 63488 00:08:48.916 } 00:08:48.916 ] 00:08:48.916 }' 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.916 17:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.485 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:49.485 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:49.485 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.485 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.485 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:49.485 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.485 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.485 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:49.485 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:49.485 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:49.485 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.485 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.486 [2024-11-26 17:53:31.158227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:49.486 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.486 17:53:31 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:49.486 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:49.486 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.486 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:49.486 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.486 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.486 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.486 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:49.486 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:49.486 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:49.486 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.486 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.486 [2024-11-26 17:53:31.333653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:49.486 [2024-11-26 17:53:31.333802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.744 BaseBdev2 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:49.744 17:53:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.744 [ 00:08:49.744 { 00:08:49.744 "name": "BaseBdev2", 00:08:49.744 "aliases": [ 00:08:49.744 "e2a57c6f-af6b-4cb2-b1e4-1bad123eaa56" 00:08:49.744 ], 00:08:49.744 "product_name": "Malloc disk", 00:08:49.744 "block_size": 512, 00:08:49.744 "num_blocks": 65536, 00:08:49.744 "uuid": "e2a57c6f-af6b-4cb2-b1e4-1bad123eaa56", 00:08:49.744 "assigned_rate_limits": { 00:08:49.744 "rw_ios_per_sec": 0, 00:08:49.744 "rw_mbytes_per_sec": 0, 00:08:49.744 "r_mbytes_per_sec": 0, 00:08:49.744 "w_mbytes_per_sec": 0 00:08:49.744 }, 00:08:49.744 "claimed": false, 00:08:49.744 "zoned": false, 00:08:49.744 "supported_io_types": { 00:08:49.744 "read": true, 00:08:49.744 "write": true, 00:08:49.744 "unmap": true, 00:08:49.744 "flush": true, 00:08:49.744 "reset": true, 00:08:49.744 "nvme_admin": false, 00:08:49.744 "nvme_io": false, 00:08:49.744 "nvme_io_md": false, 00:08:49.744 "write_zeroes": true, 00:08:49.744 "zcopy": true, 00:08:49.744 "get_zone_info": false, 00:08:49.744 
"zone_management": false, 00:08:49.744 "zone_append": false, 00:08:49.744 "compare": false, 00:08:49.744 "compare_and_write": false, 00:08:49.744 "abort": true, 00:08:49.744 "seek_hole": false, 00:08:49.744 "seek_data": false, 00:08:49.744 "copy": true, 00:08:49.744 "nvme_iov_md": false 00:08:49.744 }, 00:08:49.744 "memory_domains": [ 00:08:49.744 { 00:08:49.744 "dma_device_id": "system", 00:08:49.744 "dma_device_type": 1 00:08:49.744 }, 00:08:49.744 { 00:08:49.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.744 "dma_device_type": 2 00:08:49.744 } 00:08:49.744 ], 00:08:49.744 "driver_specific": {} 00:08:49.744 } 00:08:49.744 ] 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.744 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.003 BaseBdev3 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.003 [ 00:08:50.003 { 00:08:50.003 "name": "BaseBdev3", 00:08:50.003 "aliases": [ 00:08:50.003 "5dd86a81-1f6a-41f5-a0d2-a3fd35df2d52" 00:08:50.003 ], 00:08:50.003 "product_name": "Malloc disk", 00:08:50.003 "block_size": 512, 00:08:50.003 "num_blocks": 65536, 00:08:50.003 "uuid": "5dd86a81-1f6a-41f5-a0d2-a3fd35df2d52", 00:08:50.003 "assigned_rate_limits": { 00:08:50.003 "rw_ios_per_sec": 0, 00:08:50.003 "rw_mbytes_per_sec": 0, 00:08:50.003 "r_mbytes_per_sec": 0, 00:08:50.003 "w_mbytes_per_sec": 0 00:08:50.003 }, 00:08:50.003 "claimed": false, 00:08:50.003 "zoned": false, 00:08:50.003 "supported_io_types": { 00:08:50.003 "read": true, 00:08:50.003 "write": true, 00:08:50.003 "unmap": true, 00:08:50.003 "flush": true, 00:08:50.003 "reset": true, 00:08:50.003 "nvme_admin": false, 00:08:50.003 "nvme_io": false, 00:08:50.003 "nvme_io_md": false, 00:08:50.003 "write_zeroes": true, 00:08:50.003 
"zcopy": true, 00:08:50.003 "get_zone_info": false, 00:08:50.003 "zone_management": false, 00:08:50.003 "zone_append": false, 00:08:50.003 "compare": false, 00:08:50.003 "compare_and_write": false, 00:08:50.003 "abort": true, 00:08:50.003 "seek_hole": false, 00:08:50.003 "seek_data": false, 00:08:50.003 "copy": true, 00:08:50.003 "nvme_iov_md": false 00:08:50.003 }, 00:08:50.003 "memory_domains": [ 00:08:50.003 { 00:08:50.003 "dma_device_id": "system", 00:08:50.003 "dma_device_type": 1 00:08:50.003 }, 00:08:50.003 { 00:08:50.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.003 "dma_device_type": 2 00:08:50.003 } 00:08:50.003 ], 00:08:50.003 "driver_specific": {} 00:08:50.003 } 00:08:50.003 ] 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.003 [2024-11-26 17:53:31.683353] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:50.003 [2024-11-26 17:53:31.683494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:50.003 [2024-11-26 17:53:31.683581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.003 [2024-11-26 17:53:31.685902] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.003 17:53:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.003 "name": "Existed_Raid", 00:08:50.003 "uuid": "1fbe23b2-3543-4e35-8af3-6d7a426d9abc", 00:08:50.003 "strip_size_kb": 64, 00:08:50.003 "state": "configuring", 00:08:50.003 "raid_level": "raid0", 00:08:50.003 "superblock": true, 00:08:50.003 "num_base_bdevs": 3, 00:08:50.003 "num_base_bdevs_discovered": 2, 00:08:50.003 "num_base_bdevs_operational": 3, 00:08:50.003 "base_bdevs_list": [ 00:08:50.003 { 00:08:50.003 "name": "BaseBdev1", 00:08:50.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.003 "is_configured": false, 00:08:50.003 "data_offset": 0, 00:08:50.003 "data_size": 0 00:08:50.003 }, 00:08:50.003 { 00:08:50.003 "name": "BaseBdev2", 00:08:50.003 "uuid": "e2a57c6f-af6b-4cb2-b1e4-1bad123eaa56", 00:08:50.003 "is_configured": true, 00:08:50.003 "data_offset": 2048, 00:08:50.003 "data_size": 63488 00:08:50.003 }, 00:08:50.003 { 00:08:50.003 "name": "BaseBdev3", 00:08:50.003 "uuid": "5dd86a81-1f6a-41f5-a0d2-a3fd35df2d52", 00:08:50.003 "is_configured": true, 00:08:50.003 "data_offset": 2048, 00:08:50.003 "data_size": 63488 00:08:50.003 } 00:08:50.003 ] 00:08:50.003 }' 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.003 17:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.572 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.573 [2024-11-26 17:53:32.174528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.573 17:53:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.573 "name": "Existed_Raid", 00:08:50.573 "uuid": "1fbe23b2-3543-4e35-8af3-6d7a426d9abc", 00:08:50.573 "strip_size_kb": 64, 
00:08:50.573 "state": "configuring", 00:08:50.573 "raid_level": "raid0", 00:08:50.573 "superblock": true, 00:08:50.573 "num_base_bdevs": 3, 00:08:50.573 "num_base_bdevs_discovered": 1, 00:08:50.573 "num_base_bdevs_operational": 3, 00:08:50.573 "base_bdevs_list": [ 00:08:50.573 { 00:08:50.573 "name": "BaseBdev1", 00:08:50.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.573 "is_configured": false, 00:08:50.573 "data_offset": 0, 00:08:50.573 "data_size": 0 00:08:50.573 }, 00:08:50.573 { 00:08:50.573 "name": null, 00:08:50.573 "uuid": "e2a57c6f-af6b-4cb2-b1e4-1bad123eaa56", 00:08:50.573 "is_configured": false, 00:08:50.573 "data_offset": 0, 00:08:50.573 "data_size": 63488 00:08:50.573 }, 00:08:50.573 { 00:08:50.573 "name": "BaseBdev3", 00:08:50.573 "uuid": "5dd86a81-1f6a-41f5-a0d2-a3fd35df2d52", 00:08:50.573 "is_configured": true, 00:08:50.573 "data_offset": 2048, 00:08:50.573 "data_size": 63488 00:08:50.573 } 00:08:50.573 ] 00:08:50.573 }' 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.573 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.832 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.832 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.832 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.832 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:50.832 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.832 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:50.832 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:50.832 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.832 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.092 [2024-11-26 17:53:32.712433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:51.092 BaseBdev1 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.092 
[ 00:08:51.092 { 00:08:51.092 "name": "BaseBdev1", 00:08:51.092 "aliases": [ 00:08:51.092 "454d2bd2-5a7a-48ef-9d0a-c16d8245d1bf" 00:08:51.092 ], 00:08:51.092 "product_name": "Malloc disk", 00:08:51.092 "block_size": 512, 00:08:51.092 "num_blocks": 65536, 00:08:51.092 "uuid": "454d2bd2-5a7a-48ef-9d0a-c16d8245d1bf", 00:08:51.092 "assigned_rate_limits": { 00:08:51.092 "rw_ios_per_sec": 0, 00:08:51.092 "rw_mbytes_per_sec": 0, 00:08:51.092 "r_mbytes_per_sec": 0, 00:08:51.092 "w_mbytes_per_sec": 0 00:08:51.092 }, 00:08:51.092 "claimed": true, 00:08:51.092 "claim_type": "exclusive_write", 00:08:51.092 "zoned": false, 00:08:51.092 "supported_io_types": { 00:08:51.092 "read": true, 00:08:51.092 "write": true, 00:08:51.092 "unmap": true, 00:08:51.092 "flush": true, 00:08:51.092 "reset": true, 00:08:51.092 "nvme_admin": false, 00:08:51.092 "nvme_io": false, 00:08:51.092 "nvme_io_md": false, 00:08:51.092 "write_zeroes": true, 00:08:51.092 "zcopy": true, 00:08:51.092 "get_zone_info": false, 00:08:51.092 "zone_management": false, 00:08:51.092 "zone_append": false, 00:08:51.092 "compare": false, 00:08:51.092 "compare_and_write": false, 00:08:51.092 "abort": true, 00:08:51.092 "seek_hole": false, 00:08:51.092 "seek_data": false, 00:08:51.092 "copy": true, 00:08:51.092 "nvme_iov_md": false 00:08:51.092 }, 00:08:51.092 "memory_domains": [ 00:08:51.092 { 00:08:51.092 "dma_device_id": "system", 00:08:51.092 "dma_device_type": 1 00:08:51.092 }, 00:08:51.092 { 00:08:51.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.092 "dma_device_type": 2 00:08:51.092 } 00:08:51.092 ], 00:08:51.092 "driver_specific": {} 00:08:51.092 } 00:08:51.092 ] 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.092 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.092 "name": "Existed_Raid", 00:08:51.092 "uuid": "1fbe23b2-3543-4e35-8af3-6d7a426d9abc", 00:08:51.092 "strip_size_kb": 64, 00:08:51.092 "state": "configuring", 00:08:51.092 "raid_level": "raid0", 00:08:51.092 "superblock": true, 
00:08:51.092 "num_base_bdevs": 3, 00:08:51.092 "num_base_bdevs_discovered": 2, 00:08:51.092 "num_base_bdevs_operational": 3, 00:08:51.092 "base_bdevs_list": [ 00:08:51.092 { 00:08:51.092 "name": "BaseBdev1", 00:08:51.092 "uuid": "454d2bd2-5a7a-48ef-9d0a-c16d8245d1bf", 00:08:51.092 "is_configured": true, 00:08:51.092 "data_offset": 2048, 00:08:51.092 "data_size": 63488 00:08:51.092 }, 00:08:51.093 { 00:08:51.093 "name": null, 00:08:51.093 "uuid": "e2a57c6f-af6b-4cb2-b1e4-1bad123eaa56", 00:08:51.093 "is_configured": false, 00:08:51.093 "data_offset": 0, 00:08:51.093 "data_size": 63488 00:08:51.093 }, 00:08:51.093 { 00:08:51.093 "name": "BaseBdev3", 00:08:51.093 "uuid": "5dd86a81-1f6a-41f5-a0d2-a3fd35df2d52", 00:08:51.093 "is_configured": true, 00:08:51.093 "data_offset": 2048, 00:08:51.093 "data_size": 63488 00:08:51.093 } 00:08:51.093 ] 00:08:51.093 }' 00:08:51.093 17:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.093 17:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.392 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.392 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.392 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.392 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:51.392 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.392 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:51.392 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:51.392 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:51.392 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.392 [2024-11-26 17:53:33.231668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:51.392 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.653 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:51.653 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.653 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.653 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.653 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.653 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.653 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.653 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.653 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.653 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.653 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.653 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.653 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.653 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:51.653 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.653 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.653 "name": "Existed_Raid", 00:08:51.653 "uuid": "1fbe23b2-3543-4e35-8af3-6d7a426d9abc", 00:08:51.653 "strip_size_kb": 64, 00:08:51.653 "state": "configuring", 00:08:51.653 "raid_level": "raid0", 00:08:51.653 "superblock": true, 00:08:51.653 "num_base_bdevs": 3, 00:08:51.653 "num_base_bdevs_discovered": 1, 00:08:51.653 "num_base_bdevs_operational": 3, 00:08:51.653 "base_bdevs_list": [ 00:08:51.653 { 00:08:51.653 "name": "BaseBdev1", 00:08:51.653 "uuid": "454d2bd2-5a7a-48ef-9d0a-c16d8245d1bf", 00:08:51.653 "is_configured": true, 00:08:51.653 "data_offset": 2048, 00:08:51.653 "data_size": 63488 00:08:51.653 }, 00:08:51.653 { 00:08:51.653 "name": null, 00:08:51.653 "uuid": "e2a57c6f-af6b-4cb2-b1e4-1bad123eaa56", 00:08:51.653 "is_configured": false, 00:08:51.653 "data_offset": 0, 00:08:51.653 "data_size": 63488 00:08:51.653 }, 00:08:51.653 { 00:08:51.653 "name": null, 00:08:51.653 "uuid": "5dd86a81-1f6a-41f5-a0d2-a3fd35df2d52", 00:08:51.653 "is_configured": false, 00:08:51.653 "data_offset": 0, 00:08:51.653 "data_size": 63488 00:08:51.653 } 00:08:51.653 ] 00:08:51.653 }' 00:08:51.653 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.653 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.912 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.912 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:51.912 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.913 [2024-11-26 17:53:33.754832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.913 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.172 17:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.172 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.172 "name": "Existed_Raid", 00:08:52.172 "uuid": "1fbe23b2-3543-4e35-8af3-6d7a426d9abc", 00:08:52.172 "strip_size_kb": 64, 00:08:52.172 "state": "configuring", 00:08:52.172 "raid_level": "raid0", 00:08:52.172 "superblock": true, 00:08:52.172 "num_base_bdevs": 3, 00:08:52.172 "num_base_bdevs_discovered": 2, 00:08:52.172 "num_base_bdevs_operational": 3, 00:08:52.172 "base_bdevs_list": [ 00:08:52.172 { 00:08:52.172 "name": "BaseBdev1", 00:08:52.172 "uuid": "454d2bd2-5a7a-48ef-9d0a-c16d8245d1bf", 00:08:52.172 "is_configured": true, 00:08:52.172 "data_offset": 2048, 00:08:52.172 "data_size": 63488 00:08:52.172 }, 00:08:52.172 { 00:08:52.172 "name": null, 00:08:52.172 "uuid": "e2a57c6f-af6b-4cb2-b1e4-1bad123eaa56", 00:08:52.172 "is_configured": false, 00:08:52.172 "data_offset": 0, 00:08:52.172 "data_size": 63488 00:08:52.172 }, 00:08:52.172 { 00:08:52.172 "name": "BaseBdev3", 00:08:52.172 "uuid": "5dd86a81-1f6a-41f5-a0d2-a3fd35df2d52", 00:08:52.172 "is_configured": true, 00:08:52.172 "data_offset": 2048, 00:08:52.172 "data_size": 63488 00:08:52.172 } 00:08:52.172 ] 00:08:52.172 }' 00:08:52.172 17:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.172 17:53:33 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:08:52.431 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.432 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:52.432 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.432 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.432 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.432 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:52.432 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:52.432 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.432 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.432 [2024-11-26 17:53:34.262004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.691 "name": "Existed_Raid", 00:08:52.691 "uuid": "1fbe23b2-3543-4e35-8af3-6d7a426d9abc", 00:08:52.691 "strip_size_kb": 64, 00:08:52.691 "state": "configuring", 00:08:52.691 "raid_level": "raid0", 00:08:52.691 "superblock": true, 00:08:52.691 "num_base_bdevs": 3, 00:08:52.691 "num_base_bdevs_discovered": 1, 00:08:52.691 "num_base_bdevs_operational": 3, 00:08:52.691 "base_bdevs_list": [ 00:08:52.691 { 00:08:52.691 "name": null, 00:08:52.691 "uuid": "454d2bd2-5a7a-48ef-9d0a-c16d8245d1bf", 00:08:52.691 "is_configured": false, 00:08:52.691 "data_offset": 0, 00:08:52.691 "data_size": 63488 00:08:52.691 }, 00:08:52.691 { 00:08:52.691 "name": null, 00:08:52.691 "uuid": "e2a57c6f-af6b-4cb2-b1e4-1bad123eaa56", 00:08:52.691 "is_configured": false, 00:08:52.691 "data_offset": 0, 00:08:52.691 
"data_size": 63488 00:08:52.691 }, 00:08:52.691 { 00:08:52.691 "name": "BaseBdev3", 00:08:52.691 "uuid": "5dd86a81-1f6a-41f5-a0d2-a3fd35df2d52", 00:08:52.691 "is_configured": true, 00:08:52.691 "data_offset": 2048, 00:08:52.691 "data_size": 63488 00:08:52.691 } 00:08:52.691 ] 00:08:52.691 }' 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.691 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.272 [2024-11-26 17:53:34.894568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.272 17:53:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.272 "name": "Existed_Raid", 00:08:53.272 "uuid": "1fbe23b2-3543-4e35-8af3-6d7a426d9abc", 00:08:53.272 "strip_size_kb": 64, 00:08:53.272 "state": "configuring", 00:08:53.272 "raid_level": "raid0", 00:08:53.272 "superblock": true, 00:08:53.272 "num_base_bdevs": 3, 00:08:53.272 
"num_base_bdevs_discovered": 2, 00:08:53.272 "num_base_bdevs_operational": 3, 00:08:53.272 "base_bdevs_list": [ 00:08:53.272 { 00:08:53.272 "name": null, 00:08:53.272 "uuid": "454d2bd2-5a7a-48ef-9d0a-c16d8245d1bf", 00:08:53.272 "is_configured": false, 00:08:53.272 "data_offset": 0, 00:08:53.272 "data_size": 63488 00:08:53.272 }, 00:08:53.272 { 00:08:53.272 "name": "BaseBdev2", 00:08:53.272 "uuid": "e2a57c6f-af6b-4cb2-b1e4-1bad123eaa56", 00:08:53.272 "is_configured": true, 00:08:53.272 "data_offset": 2048, 00:08:53.272 "data_size": 63488 00:08:53.272 }, 00:08:53.272 { 00:08:53.272 "name": "BaseBdev3", 00:08:53.272 "uuid": "5dd86a81-1f6a-41f5-a0d2-a3fd35df2d52", 00:08:53.272 "is_configured": true, 00:08:53.272 "data_offset": 2048, 00:08:53.272 "data_size": 63488 00:08:53.272 } 00:08:53.272 ] 00:08:53.272 }' 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.272 17:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.533 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.533 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.533 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.533 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:53.533 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.793 17:53:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 454d2bd2-5a7a-48ef-9d0a-c16d8245d1bf 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.793 [2024-11-26 17:53:35.475451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:53.793 [2024-11-26 17:53:35.475812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:53.793 [2024-11-26 17:53:35.475870] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:53.793 [2024-11-26 17:53:35.476184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:53.793 [2024-11-26 17:53:35.476389] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:53.793 [2024-11-26 17:53:35.476436] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:53.793 NewBaseBdev 00:08:53.793 [2024-11-26 17:53:35.476641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:53.793 
17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.793 [ 00:08:53.793 { 00:08:53.793 "name": "NewBaseBdev", 00:08:53.793 "aliases": [ 00:08:53.793 "454d2bd2-5a7a-48ef-9d0a-c16d8245d1bf" 00:08:53.793 ], 00:08:53.793 "product_name": "Malloc disk", 00:08:53.793 "block_size": 512, 00:08:53.793 "num_blocks": 65536, 00:08:53.793 "uuid": "454d2bd2-5a7a-48ef-9d0a-c16d8245d1bf", 00:08:53.793 "assigned_rate_limits": { 00:08:53.793 "rw_ios_per_sec": 0, 00:08:53.793 "rw_mbytes_per_sec": 0, 00:08:53.793 "r_mbytes_per_sec": 0, 00:08:53.793 "w_mbytes_per_sec": 0 00:08:53.793 }, 00:08:53.793 "claimed": true, 00:08:53.793 "claim_type": "exclusive_write", 00:08:53.793 "zoned": false, 00:08:53.793 "supported_io_types": { 00:08:53.793 "read": true, 00:08:53.793 "write": true, 00:08:53.793 
"unmap": true, 00:08:53.793 "flush": true, 00:08:53.793 "reset": true, 00:08:53.793 "nvme_admin": false, 00:08:53.793 "nvme_io": false, 00:08:53.793 "nvme_io_md": false, 00:08:53.793 "write_zeroes": true, 00:08:53.793 "zcopy": true, 00:08:53.793 "get_zone_info": false, 00:08:53.793 "zone_management": false, 00:08:53.793 "zone_append": false, 00:08:53.793 "compare": false, 00:08:53.793 "compare_and_write": false, 00:08:53.793 "abort": true, 00:08:53.793 "seek_hole": false, 00:08:53.793 "seek_data": false, 00:08:53.793 "copy": true, 00:08:53.793 "nvme_iov_md": false 00:08:53.793 }, 00:08:53.793 "memory_domains": [ 00:08:53.793 { 00:08:53.793 "dma_device_id": "system", 00:08:53.793 "dma_device_type": 1 00:08:53.793 }, 00:08:53.793 { 00:08:53.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.793 "dma_device_type": 2 00:08:53.793 } 00:08:53.793 ], 00:08:53.793 "driver_specific": {} 00:08:53.793 } 00:08:53.793 ] 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.793 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.794 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.794 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.794 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.794 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.794 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.794 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.794 "name": "Existed_Raid", 00:08:53.794 "uuid": "1fbe23b2-3543-4e35-8af3-6d7a426d9abc", 00:08:53.794 "strip_size_kb": 64, 00:08:53.794 "state": "online", 00:08:53.794 "raid_level": "raid0", 00:08:53.794 "superblock": true, 00:08:53.794 "num_base_bdevs": 3, 00:08:53.794 "num_base_bdevs_discovered": 3, 00:08:53.794 "num_base_bdevs_operational": 3, 00:08:53.794 "base_bdevs_list": [ 00:08:53.794 { 00:08:53.794 "name": "NewBaseBdev", 00:08:53.794 "uuid": "454d2bd2-5a7a-48ef-9d0a-c16d8245d1bf", 00:08:53.794 "is_configured": true, 00:08:53.794 "data_offset": 2048, 00:08:53.794 "data_size": 63488 00:08:53.794 }, 00:08:53.794 { 00:08:53.794 "name": "BaseBdev2", 00:08:53.794 "uuid": "e2a57c6f-af6b-4cb2-b1e4-1bad123eaa56", 00:08:53.794 "is_configured": true, 00:08:53.794 "data_offset": 2048, 00:08:53.794 "data_size": 63488 00:08:53.794 }, 00:08:53.794 { 00:08:53.794 "name": "BaseBdev3", 00:08:53.794 "uuid": "5dd86a81-1f6a-41f5-a0d2-a3fd35df2d52", 00:08:53.794 
"is_configured": true, 00:08:53.794 "data_offset": 2048, 00:08:53.794 "data_size": 63488 00:08:53.794 } 00:08:53.794 ] 00:08:53.794 }' 00:08:53.794 17:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.794 17:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:54.364 [2024-11-26 17:53:36.019008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:54.364 "name": "Existed_Raid", 00:08:54.364 "aliases": [ 00:08:54.364 "1fbe23b2-3543-4e35-8af3-6d7a426d9abc" 00:08:54.364 ], 00:08:54.364 "product_name": "Raid 
Volume", 00:08:54.364 "block_size": 512, 00:08:54.364 "num_blocks": 190464, 00:08:54.364 "uuid": "1fbe23b2-3543-4e35-8af3-6d7a426d9abc", 00:08:54.364 "assigned_rate_limits": { 00:08:54.364 "rw_ios_per_sec": 0, 00:08:54.364 "rw_mbytes_per_sec": 0, 00:08:54.364 "r_mbytes_per_sec": 0, 00:08:54.364 "w_mbytes_per_sec": 0 00:08:54.364 }, 00:08:54.364 "claimed": false, 00:08:54.364 "zoned": false, 00:08:54.364 "supported_io_types": { 00:08:54.364 "read": true, 00:08:54.364 "write": true, 00:08:54.364 "unmap": true, 00:08:54.364 "flush": true, 00:08:54.364 "reset": true, 00:08:54.364 "nvme_admin": false, 00:08:54.364 "nvme_io": false, 00:08:54.364 "nvme_io_md": false, 00:08:54.364 "write_zeroes": true, 00:08:54.364 "zcopy": false, 00:08:54.364 "get_zone_info": false, 00:08:54.364 "zone_management": false, 00:08:54.364 "zone_append": false, 00:08:54.364 "compare": false, 00:08:54.364 "compare_and_write": false, 00:08:54.364 "abort": false, 00:08:54.364 "seek_hole": false, 00:08:54.364 "seek_data": false, 00:08:54.364 "copy": false, 00:08:54.364 "nvme_iov_md": false 00:08:54.364 }, 00:08:54.364 "memory_domains": [ 00:08:54.364 { 00:08:54.364 "dma_device_id": "system", 00:08:54.364 "dma_device_type": 1 00:08:54.364 }, 00:08:54.364 { 00:08:54.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.364 "dma_device_type": 2 00:08:54.364 }, 00:08:54.364 { 00:08:54.364 "dma_device_id": "system", 00:08:54.364 "dma_device_type": 1 00:08:54.364 }, 00:08:54.364 { 00:08:54.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.364 "dma_device_type": 2 00:08:54.364 }, 00:08:54.364 { 00:08:54.364 "dma_device_id": "system", 00:08:54.364 "dma_device_type": 1 00:08:54.364 }, 00:08:54.364 { 00:08:54.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.364 "dma_device_type": 2 00:08:54.364 } 00:08:54.364 ], 00:08:54.364 "driver_specific": { 00:08:54.364 "raid": { 00:08:54.364 "uuid": "1fbe23b2-3543-4e35-8af3-6d7a426d9abc", 00:08:54.364 "strip_size_kb": 64, 00:08:54.364 "state": "online", 
00:08:54.364 "raid_level": "raid0", 00:08:54.364 "superblock": true, 00:08:54.364 "num_base_bdevs": 3, 00:08:54.364 "num_base_bdevs_discovered": 3, 00:08:54.364 "num_base_bdevs_operational": 3, 00:08:54.364 "base_bdevs_list": [ 00:08:54.364 { 00:08:54.364 "name": "NewBaseBdev", 00:08:54.364 "uuid": "454d2bd2-5a7a-48ef-9d0a-c16d8245d1bf", 00:08:54.364 "is_configured": true, 00:08:54.364 "data_offset": 2048, 00:08:54.364 "data_size": 63488 00:08:54.364 }, 00:08:54.364 { 00:08:54.364 "name": "BaseBdev2", 00:08:54.364 "uuid": "e2a57c6f-af6b-4cb2-b1e4-1bad123eaa56", 00:08:54.364 "is_configured": true, 00:08:54.364 "data_offset": 2048, 00:08:54.364 "data_size": 63488 00:08:54.364 }, 00:08:54.364 { 00:08:54.364 "name": "BaseBdev3", 00:08:54.364 "uuid": "5dd86a81-1f6a-41f5-a0d2-a3fd35df2d52", 00:08:54.364 "is_configured": true, 00:08:54.364 "data_offset": 2048, 00:08:54.364 "data_size": 63488 00:08:54.364 } 00:08:54.364 ] 00:08:54.364 } 00:08:54.364 } 00:08:54.364 }' 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:54.364 BaseBdev2 00:08:54.364 BaseBdev3' 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.364 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.624 [2024-11-26 17:53:36.330135] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:54.624 [2024-11-26 17:53:36.330242] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.624 [2024-11-26 17:53:36.330371] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.624 [2024-11-26 17:53:36.330455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.624 [2024-11-26 17:53:36.330508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64640 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64640 ']' 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64640 00:08:54.624 17:53:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64640 00:08:54.624 killing process with pid 64640 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64640' 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64640 00:08:54.624 [2024-11-26 17:53:36.379791] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.624 17:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64640 00:08:54.884 [2024-11-26 17:53:36.733406] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.264 17:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:56.264 00:08:56.264 real 0m11.466s 00:08:56.264 user 0m18.147s 00:08:56.264 sys 0m1.982s 00:08:56.264 ************************************ 00:08:56.265 END TEST raid_state_function_test_sb 00:08:56.265 ************************************ 00:08:56.265 17:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.265 17:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.265 17:53:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:56.265 17:53:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:56.265 17:53:38 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.265 17:53:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.265 ************************************ 00:08:56.265 START TEST raid_superblock_test 00:08:56.265 ************************************ 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:56.265 17:53:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65266 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65266 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65266 ']' 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.265 17:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.524 [2024-11-26 17:53:38.158588] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:08:56.524 [2024-11-26 17:53:38.158797] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65266 ] 00:08:56.524 [2024-11-26 17:53:38.333092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.784 [2024-11-26 17:53:38.460315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.043 [2024-11-26 17:53:38.674809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.043 [2024-11-26 17:53:38.674882] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:57.304 
17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.304 malloc1 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.304 [2024-11-26 17:53:39.123467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:57.304 [2024-11-26 17:53:39.123605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.304 [2024-11-26 17:53:39.123653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:57.304 [2024-11-26 17:53:39.123691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.304 [2024-11-26 17:53:39.126210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.304 [2024-11-26 17:53:39.126301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:57.304 pt1 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.304 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.564 malloc2 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.564 [2024-11-26 17:53:39.190430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:57.564 [2024-11-26 17:53:39.190560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.564 [2024-11-26 17:53:39.190613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:57.564 [2024-11-26 17:53:39.190653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.564 [2024-11-26 17:53:39.192965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.564 [2024-11-26 17:53:39.193052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:57.564 
pt2 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.564 malloc3 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.564 [2024-11-26 17:53:39.259823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:57.564 [2024-11-26 17:53:39.259943] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.564 [2024-11-26 17:53:39.259991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:57.564 [2024-11-26 17:53:39.260042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.564 [2024-11-26 17:53:39.262412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.564 [2024-11-26 17:53:39.262502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:57.564 pt3 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.564 [2024-11-26 17:53:39.271881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:57.564 [2024-11-26 17:53:39.273908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:57.564 [2024-11-26 17:53:39.274041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:57.564 [2024-11-26 17:53:39.274248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:57.564 [2024-11-26 17:53:39.274298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:57.564 [2024-11-26 17:53:39.274576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:57.564 [2024-11-26 17:53:39.274798] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:57.564 [2024-11-26 17:53:39.274842] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:57.564 [2024-11-26 17:53:39.275074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.564 17:53:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.564 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.564 "name": "raid_bdev1", 00:08:57.564 "uuid": "32020894-e788-4ff9-8ea7-b1cc5b3d0b93", 00:08:57.564 "strip_size_kb": 64, 00:08:57.564 "state": "online", 00:08:57.564 "raid_level": "raid0", 00:08:57.564 "superblock": true, 00:08:57.564 "num_base_bdevs": 3, 00:08:57.564 "num_base_bdevs_discovered": 3, 00:08:57.564 "num_base_bdevs_operational": 3, 00:08:57.564 "base_bdevs_list": [ 00:08:57.564 { 00:08:57.564 "name": "pt1", 00:08:57.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:57.564 "is_configured": true, 00:08:57.564 "data_offset": 2048, 00:08:57.564 "data_size": 63488 00:08:57.564 }, 00:08:57.564 { 00:08:57.564 "name": "pt2", 00:08:57.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:57.564 "is_configured": true, 00:08:57.565 "data_offset": 2048, 00:08:57.565 "data_size": 63488 00:08:57.565 }, 00:08:57.565 { 00:08:57.565 "name": "pt3", 00:08:57.565 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:57.565 "is_configured": true, 00:08:57.565 "data_offset": 2048, 00:08:57.565 "data_size": 63488 00:08:57.565 } 00:08:57.565 ] 00:08:57.565 }' 00:08:57.565 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.565 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.133 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:58.133 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:58.133 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:58.133 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:58.133 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:58.133 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:58.133 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:58.133 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:58.133 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.133 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.133 [2024-11-26 17:53:39.723433] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.133 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.133 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:58.133 "name": "raid_bdev1", 00:08:58.133 "aliases": [ 00:08:58.133 "32020894-e788-4ff9-8ea7-b1cc5b3d0b93" 00:08:58.133 ], 00:08:58.133 "product_name": "Raid Volume", 00:08:58.133 "block_size": 512, 00:08:58.133 "num_blocks": 190464, 00:08:58.133 "uuid": "32020894-e788-4ff9-8ea7-b1cc5b3d0b93", 00:08:58.133 "assigned_rate_limits": { 00:08:58.133 "rw_ios_per_sec": 0, 00:08:58.133 "rw_mbytes_per_sec": 0, 00:08:58.133 "r_mbytes_per_sec": 0, 00:08:58.133 "w_mbytes_per_sec": 0 00:08:58.133 }, 00:08:58.133 "claimed": false, 00:08:58.133 "zoned": false, 00:08:58.133 "supported_io_types": { 00:08:58.133 "read": true, 00:08:58.133 "write": true, 00:08:58.133 "unmap": true, 00:08:58.133 "flush": true, 00:08:58.133 "reset": true, 00:08:58.133 "nvme_admin": false, 00:08:58.133 "nvme_io": false, 00:08:58.133 "nvme_io_md": false, 00:08:58.133 "write_zeroes": true, 00:08:58.133 "zcopy": false, 00:08:58.133 "get_zone_info": false, 00:08:58.133 "zone_management": false, 00:08:58.133 "zone_append": false, 00:08:58.133 "compare": 
false, 00:08:58.133 "compare_and_write": false, 00:08:58.133 "abort": false, 00:08:58.133 "seek_hole": false, 00:08:58.133 "seek_data": false, 00:08:58.133 "copy": false, 00:08:58.133 "nvme_iov_md": false 00:08:58.133 }, 00:08:58.133 "memory_domains": [ 00:08:58.133 { 00:08:58.133 "dma_device_id": "system", 00:08:58.133 "dma_device_type": 1 00:08:58.133 }, 00:08:58.133 { 00:08:58.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.133 "dma_device_type": 2 00:08:58.133 }, 00:08:58.133 { 00:08:58.133 "dma_device_id": "system", 00:08:58.133 "dma_device_type": 1 00:08:58.133 }, 00:08:58.133 { 00:08:58.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.133 "dma_device_type": 2 00:08:58.133 }, 00:08:58.133 { 00:08:58.133 "dma_device_id": "system", 00:08:58.133 "dma_device_type": 1 00:08:58.133 }, 00:08:58.133 { 00:08:58.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.133 "dma_device_type": 2 00:08:58.133 } 00:08:58.133 ], 00:08:58.133 "driver_specific": { 00:08:58.133 "raid": { 00:08:58.133 "uuid": "32020894-e788-4ff9-8ea7-b1cc5b3d0b93", 00:08:58.133 "strip_size_kb": 64, 00:08:58.133 "state": "online", 00:08:58.133 "raid_level": "raid0", 00:08:58.133 "superblock": true, 00:08:58.133 "num_base_bdevs": 3, 00:08:58.133 "num_base_bdevs_discovered": 3, 00:08:58.133 "num_base_bdevs_operational": 3, 00:08:58.133 "base_bdevs_list": [ 00:08:58.133 { 00:08:58.133 "name": "pt1", 00:08:58.133 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:58.133 "is_configured": true, 00:08:58.133 "data_offset": 2048, 00:08:58.133 "data_size": 63488 00:08:58.133 }, 00:08:58.133 { 00:08:58.133 "name": "pt2", 00:08:58.133 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.133 "is_configured": true, 00:08:58.133 "data_offset": 2048, 00:08:58.133 "data_size": 63488 00:08:58.133 }, 00:08:58.133 { 00:08:58.133 "name": "pt3", 00:08:58.133 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:58.133 "is_configured": true, 00:08:58.133 "data_offset": 2048, 00:08:58.133 "data_size": 
63488 00:08:58.133 } 00:08:58.134 ] 00:08:58.134 } 00:08:58.134 } 00:08:58.134 }' 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:58.134 pt2 00:08:58.134 pt3' 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.134 17:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.394 [2024-11-26 17:53:39.994953] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=32020894-e788-4ff9-8ea7-b1cc5b3d0b93 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 32020894-e788-4ff9-8ea7-b1cc5b3d0b93 ']' 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.394 [2024-11-26 17:53:40.038563] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:58.394 [2024-11-26 17:53:40.038669] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.394 [2024-11-26 17:53:40.038796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.394 [2024-11-26 17:53:40.038903] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.394 [2024-11-26 17:53:40.038954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:58.394 17:53:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.394 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.394 [2024-11-26 17:53:40.178374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:58.394 [2024-11-26 17:53:40.180461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:58.394 [2024-11-26 17:53:40.180577] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:58.394 [2024-11-26 17:53:40.180661] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:58.394 [2024-11-26 17:53:40.180759] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:58.394 [2024-11-26 17:53:40.180827] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:58.395 [2024-11-26 17:53:40.180927] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:58.395 [2024-11-26 17:53:40.180967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:58.395 request: 00:08:58.395 { 00:08:58.395 "name": "raid_bdev1", 00:08:58.395 "raid_level": "raid0", 00:08:58.395 "base_bdevs": [ 00:08:58.395 "malloc1", 00:08:58.395 "malloc2", 00:08:58.395 "malloc3" 00:08:58.395 ], 00:08:58.395 "strip_size_kb": 64, 00:08:58.395 "superblock": false, 00:08:58.395 "method": "bdev_raid_create", 00:08:58.395 "req_id": 1 00:08:58.395 } 00:08:58.395 Got JSON-RPC error response 00:08:58.395 response: 00:08:58.395 { 00:08:58.395 "code": -17, 00:08:58.395 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:58.395 } 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.395 [2024-11-26 17:53:40.234219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:58.395 [2024-11-26 17:53:40.234331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.395 [2024-11-26 17:53:40.234372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:58.395 [2024-11-26 17:53:40.234400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.395 [2024-11-26 17:53:40.236784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.395 [2024-11-26 17:53:40.236884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:58.395 [2024-11-26 17:53:40.237013] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:58.395 [2024-11-26 17:53:40.237121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:58.395 pt1 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.395 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.669 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.669 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.669 "name": "raid_bdev1", 00:08:58.669 "uuid": "32020894-e788-4ff9-8ea7-b1cc5b3d0b93", 00:08:58.669 
"strip_size_kb": 64, 00:08:58.669 "state": "configuring", 00:08:58.669 "raid_level": "raid0", 00:08:58.669 "superblock": true, 00:08:58.669 "num_base_bdevs": 3, 00:08:58.669 "num_base_bdevs_discovered": 1, 00:08:58.669 "num_base_bdevs_operational": 3, 00:08:58.669 "base_bdevs_list": [ 00:08:58.669 { 00:08:58.669 "name": "pt1", 00:08:58.669 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:58.669 "is_configured": true, 00:08:58.669 "data_offset": 2048, 00:08:58.669 "data_size": 63488 00:08:58.669 }, 00:08:58.669 { 00:08:58.669 "name": null, 00:08:58.669 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.669 "is_configured": false, 00:08:58.669 "data_offset": 2048, 00:08:58.669 "data_size": 63488 00:08:58.669 }, 00:08:58.669 { 00:08:58.669 "name": null, 00:08:58.669 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:58.669 "is_configured": false, 00:08:58.669 "data_offset": 2048, 00:08:58.669 "data_size": 63488 00:08:58.669 } 00:08:58.669 ] 00:08:58.669 }' 00:08:58.669 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.669 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.930 [2024-11-26 17:53:40.709432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:58.930 [2024-11-26 17:53:40.709571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.930 [2024-11-26 17:53:40.709610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:58.930 [2024-11-26 17:53:40.709621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.930 [2024-11-26 17:53:40.710130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.930 [2024-11-26 17:53:40.710151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:58.930 [2024-11-26 17:53:40.710250] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:58.930 [2024-11-26 17:53:40.710281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:58.930 pt2 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.930 [2024-11-26 17:53:40.721421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.930 17:53:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.930 "name": "raid_bdev1", 00:08:58.930 "uuid": "32020894-e788-4ff9-8ea7-b1cc5b3d0b93", 00:08:58.930 "strip_size_kb": 64, 00:08:58.930 "state": "configuring", 00:08:58.930 "raid_level": "raid0", 00:08:58.930 "superblock": true, 00:08:58.930 "num_base_bdevs": 3, 00:08:58.930 "num_base_bdevs_discovered": 1, 00:08:58.930 "num_base_bdevs_operational": 3, 00:08:58.930 "base_bdevs_list": [ 00:08:58.930 { 00:08:58.930 "name": "pt1", 00:08:58.930 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:58.930 "is_configured": true, 00:08:58.930 "data_offset": 2048, 00:08:58.930 "data_size": 63488 00:08:58.930 }, 00:08:58.930 { 00:08:58.930 "name": null, 00:08:58.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.930 "is_configured": false, 00:08:58.930 "data_offset": 0, 00:08:58.930 "data_size": 63488 00:08:58.930 }, 00:08:58.930 { 00:08:58.930 "name": null, 00:08:58.930 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:58.930 
"is_configured": false, 00:08:58.930 "data_offset": 2048, 00:08:58.930 "data_size": 63488 00:08:58.930 } 00:08:58.930 ] 00:08:58.930 }' 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.930 17:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.500 [2024-11-26 17:53:41.212651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:59.500 [2024-11-26 17:53:41.212801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.500 [2024-11-26 17:53:41.212843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:59.500 [2024-11-26 17:53:41.212902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.500 [2024-11-26 17:53:41.213471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.500 [2024-11-26 17:53:41.213554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:59.500 [2024-11-26 17:53:41.213692] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:59.500 [2024-11-26 17:53:41.213753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:59.500 pt2 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.500 [2024-11-26 17:53:41.224604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:59.500 [2024-11-26 17:53:41.224717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.500 [2024-11-26 17:53:41.224755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:59.500 [2024-11-26 17:53:41.224788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.500 [2024-11-26 17:53:41.225315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.500 [2024-11-26 17:53:41.225391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:59.500 [2024-11-26 17:53:41.225508] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:59.500 [2024-11-26 17:53:41.225568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:59.500 [2024-11-26 17:53:41.225754] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:59.500 [2024-11-26 17:53:41.225801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:59.500 [2024-11-26 17:53:41.226136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:59.500 [2024-11-26 17:53:41.226358] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:59.500 [2024-11-26 17:53:41.226400] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:59.500 [2024-11-26 17:53:41.226601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.500 pt3 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.500 "name": "raid_bdev1", 00:08:59.500 "uuid": "32020894-e788-4ff9-8ea7-b1cc5b3d0b93", 00:08:59.500 "strip_size_kb": 64, 00:08:59.500 "state": "online", 00:08:59.500 "raid_level": "raid0", 00:08:59.500 "superblock": true, 00:08:59.500 "num_base_bdevs": 3, 00:08:59.500 "num_base_bdevs_discovered": 3, 00:08:59.500 "num_base_bdevs_operational": 3, 00:08:59.500 "base_bdevs_list": [ 00:08:59.500 { 00:08:59.500 "name": "pt1", 00:08:59.500 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.500 "is_configured": true, 00:08:59.500 "data_offset": 2048, 00:08:59.500 "data_size": 63488 00:08:59.500 }, 00:08:59.500 { 00:08:59.500 "name": "pt2", 00:08:59.500 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.500 "is_configured": true, 00:08:59.500 "data_offset": 2048, 00:08:59.500 "data_size": 63488 00:08:59.500 }, 00:08:59.500 { 00:08:59.500 "name": "pt3", 00:08:59.500 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:59.500 "is_configured": true, 00:08:59.500 "data_offset": 2048, 00:08:59.500 "data_size": 63488 00:08:59.500 } 00:08:59.500 ] 00:08:59.500 }' 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.500 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:00.070 17:53:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.070 [2024-11-26 17:53:41.688200] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.070 "name": "raid_bdev1", 00:09:00.070 "aliases": [ 00:09:00.070 "32020894-e788-4ff9-8ea7-b1cc5b3d0b93" 00:09:00.070 ], 00:09:00.070 "product_name": "Raid Volume", 00:09:00.070 "block_size": 512, 00:09:00.070 "num_blocks": 190464, 00:09:00.070 "uuid": "32020894-e788-4ff9-8ea7-b1cc5b3d0b93", 00:09:00.070 "assigned_rate_limits": { 00:09:00.070 "rw_ios_per_sec": 0, 00:09:00.070 "rw_mbytes_per_sec": 0, 00:09:00.070 "r_mbytes_per_sec": 0, 00:09:00.070 "w_mbytes_per_sec": 0 00:09:00.070 }, 00:09:00.070 "claimed": false, 00:09:00.070 "zoned": false, 00:09:00.070 "supported_io_types": { 00:09:00.070 "read": true, 00:09:00.070 "write": true, 00:09:00.070 "unmap": true, 00:09:00.070 "flush": true, 00:09:00.070 "reset": true, 00:09:00.070 "nvme_admin": false, 00:09:00.070 "nvme_io": false, 00:09:00.070 "nvme_io_md": false, 00:09:00.070 
"write_zeroes": true, 00:09:00.070 "zcopy": false, 00:09:00.070 "get_zone_info": false, 00:09:00.070 "zone_management": false, 00:09:00.070 "zone_append": false, 00:09:00.070 "compare": false, 00:09:00.070 "compare_and_write": false, 00:09:00.070 "abort": false, 00:09:00.070 "seek_hole": false, 00:09:00.070 "seek_data": false, 00:09:00.070 "copy": false, 00:09:00.070 "nvme_iov_md": false 00:09:00.070 }, 00:09:00.070 "memory_domains": [ 00:09:00.070 { 00:09:00.070 "dma_device_id": "system", 00:09:00.070 "dma_device_type": 1 00:09:00.070 }, 00:09:00.070 { 00:09:00.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.070 "dma_device_type": 2 00:09:00.070 }, 00:09:00.070 { 00:09:00.070 "dma_device_id": "system", 00:09:00.070 "dma_device_type": 1 00:09:00.070 }, 00:09:00.070 { 00:09:00.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.070 "dma_device_type": 2 00:09:00.070 }, 00:09:00.070 { 00:09:00.070 "dma_device_id": "system", 00:09:00.070 "dma_device_type": 1 00:09:00.070 }, 00:09:00.070 { 00:09:00.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.070 "dma_device_type": 2 00:09:00.070 } 00:09:00.070 ], 00:09:00.070 "driver_specific": { 00:09:00.070 "raid": { 00:09:00.070 "uuid": "32020894-e788-4ff9-8ea7-b1cc5b3d0b93", 00:09:00.070 "strip_size_kb": 64, 00:09:00.070 "state": "online", 00:09:00.070 "raid_level": "raid0", 00:09:00.070 "superblock": true, 00:09:00.070 "num_base_bdevs": 3, 00:09:00.070 "num_base_bdevs_discovered": 3, 00:09:00.070 "num_base_bdevs_operational": 3, 00:09:00.070 "base_bdevs_list": [ 00:09:00.070 { 00:09:00.070 "name": "pt1", 00:09:00.070 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.070 "is_configured": true, 00:09:00.070 "data_offset": 2048, 00:09:00.070 "data_size": 63488 00:09:00.070 }, 00:09:00.070 { 00:09:00.070 "name": "pt2", 00:09:00.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.070 "is_configured": true, 00:09:00.070 "data_offset": 2048, 00:09:00.070 "data_size": 63488 00:09:00.070 }, 00:09:00.070 
{ 00:09:00.070 "name": "pt3", 00:09:00.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:00.070 "is_configured": true, 00:09:00.070 "data_offset": 2048, 00:09:00.070 "data_size": 63488 00:09:00.070 } 00:09:00.070 ] 00:09:00.070 } 00:09:00.070 } 00:09:00.070 }' 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:00.070 pt2 00:09:00.070 pt3' 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:00.070 17:53:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.070 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.330 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.330 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.330 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.330 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:00.330 17:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:00.330 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.330 17:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.330 
[2024-11-26 17:53:41.979642] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.330 17:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.330 17:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 32020894-e788-4ff9-8ea7-b1cc5b3d0b93 '!=' 32020894-e788-4ff9-8ea7-b1cc5b3d0b93 ']' 00:09:00.330 17:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:00.330 17:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.330 17:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:00.330 17:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65266 00:09:00.330 17:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65266 ']' 00:09:00.330 17:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65266 00:09:00.330 17:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:00.330 17:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.330 17:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65266 00:09:00.330 17:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.330 17:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.330 killing process with pid 65266 00:09:00.330 17:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65266' 00:09:00.330 17:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65266 00:09:00.330 [2024-11-26 17:53:42.061248] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:00.330 [2024-11-26 17:53:42.061388] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.330 17:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65266 00:09:00.330 [2024-11-26 17:53:42.061457] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.330 [2024-11-26 17:53:42.061472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:00.588 [2024-11-26 17:53:42.378593] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:02.054 ************************************ 00:09:02.054 END TEST raid_superblock_test 00:09:02.054 ************************************ 00:09:02.054 17:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:02.054 00:09:02.054 real 0m5.547s 00:09:02.054 user 0m7.957s 00:09:02.054 sys 0m0.910s 00:09:02.054 17:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.054 17:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.054 17:53:43 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:02.054 17:53:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:02.054 17:53:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.054 17:53:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:02.054 ************************************ 00:09:02.054 START TEST raid_read_error_test 00:09:02.054 ************************************ 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:02.054 17:53:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ayPT0FBZJz 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65519 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65519 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65519 ']' 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.054 17:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.054 [2024-11-26 17:53:43.781998] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:09:02.054 [2024-11-26 17:53:43.782211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65519 ] 00:09:02.314 [2024-11-26 17:53:43.957494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.314 [2024-11-26 17:53:44.089045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.572 [2024-11-26 17:53:44.304344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.572 [2024-11-26 17:53:44.304511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.829 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.829 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:02.829 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.829 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:02.829 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.829 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.088 BaseBdev1_malloc 00:09:03.088 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.088 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:03.088 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.088 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.088 true 00:09:03.088 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:03.088 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:03.088 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.088 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.088 [2024-11-26 17:53:44.721654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:03.088 [2024-11-26 17:53:44.721730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.088 [2024-11-26 17:53:44.721764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:03.088 [2024-11-26 17:53:44.721781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.088 [2024-11-26 17:53:44.724393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.088 [2024-11-26 17:53:44.724444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:03.088 BaseBdev1 00:09:03.088 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.088 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:03.088 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:03.088 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.088 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.088 BaseBdev2_malloc 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.089 true 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.089 [2024-11-26 17:53:44.793362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:03.089 [2024-11-26 17:53:44.793431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.089 [2024-11-26 17:53:44.793454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:03.089 [2024-11-26 17:53:44.793466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.089 [2024-11-26 17:53:44.795858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.089 [2024-11-26 17:53:44.795907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:03.089 BaseBdev2 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.089 BaseBdev3_malloc 00:09:03.089 17:53:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.089 true 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.089 [2024-11-26 17:53:44.871385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:03.089 [2024-11-26 17:53:44.871460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.089 [2024-11-26 17:53:44.871485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:03.089 [2024-11-26 17:53:44.871496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.089 [2024-11-26 17:53:44.874065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.089 [2024-11-26 17:53:44.874113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:03.089 BaseBdev3 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.089 [2024-11-26 17:53:44.883535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.089 [2024-11-26 17:53:44.886329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:03.089 [2024-11-26 17:53:44.886430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:03.089 [2024-11-26 17:53:44.886682] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:03.089 [2024-11-26 17:53:44.886699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:03.089 [2024-11-26 17:53:44.887011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:03.089 [2024-11-26 17:53:44.887223] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:03.089 [2024-11-26 17:53:44.887239] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:03.089 [2024-11-26 17:53:44.887481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.089 17:53:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.089 "name": "raid_bdev1", 00:09:03.089 "uuid": "d5086e63-e7d5-4544-8591-786e0f32e34d", 00:09:03.089 "strip_size_kb": 64, 00:09:03.089 "state": "online", 00:09:03.089 "raid_level": "raid0", 00:09:03.089 "superblock": true, 00:09:03.089 "num_base_bdevs": 3, 00:09:03.089 "num_base_bdevs_discovered": 3, 00:09:03.089 "num_base_bdevs_operational": 3, 00:09:03.089 "base_bdevs_list": [ 00:09:03.089 { 00:09:03.089 "name": "BaseBdev1", 00:09:03.089 "uuid": "63a97822-a6ea-56f7-ab3a-1b98a6755cef", 00:09:03.089 "is_configured": true, 00:09:03.089 "data_offset": 2048, 00:09:03.089 "data_size": 63488 00:09:03.089 }, 00:09:03.089 { 00:09:03.089 "name": "BaseBdev2", 00:09:03.089 "uuid": "81554708-9604-58a6-a429-f3fc8348dada", 00:09:03.089 "is_configured": true, 00:09:03.089 "data_offset": 2048, 00:09:03.089 "data_size": 63488 
00:09:03.089 }, 00:09:03.089 { 00:09:03.089 "name": "BaseBdev3", 00:09:03.089 "uuid": "70b5df92-d53c-570a-9520-9c8bce129e22", 00:09:03.089 "is_configured": true, 00:09:03.089 "data_offset": 2048, 00:09:03.089 "data_size": 63488 00:09:03.089 } 00:09:03.089 ] 00:09:03.089 }' 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.089 17:53:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.657 17:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:03.657 17:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:03.657 [2024-11-26 17:53:45.439928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.594 "name": "raid_bdev1", 00:09:04.594 "uuid": "d5086e63-e7d5-4544-8591-786e0f32e34d", 00:09:04.594 "strip_size_kb": 64, 00:09:04.594 "state": "online", 00:09:04.594 "raid_level": "raid0", 00:09:04.594 "superblock": true, 00:09:04.594 "num_base_bdevs": 3, 00:09:04.594 "num_base_bdevs_discovered": 3, 00:09:04.594 "num_base_bdevs_operational": 3, 00:09:04.594 "base_bdevs_list": [ 00:09:04.594 { 00:09:04.594 "name": "BaseBdev1", 00:09:04.594 "uuid": "63a97822-a6ea-56f7-ab3a-1b98a6755cef", 00:09:04.594 "is_configured": true, 00:09:04.594 "data_offset": 2048, 00:09:04.594 "data_size": 63488 
00:09:04.594 }, 00:09:04.594 { 00:09:04.594 "name": "BaseBdev2", 00:09:04.594 "uuid": "81554708-9604-58a6-a429-f3fc8348dada", 00:09:04.594 "is_configured": true, 00:09:04.594 "data_offset": 2048, 00:09:04.594 "data_size": 63488 00:09:04.594 }, 00:09:04.594 { 00:09:04.594 "name": "BaseBdev3", 00:09:04.594 "uuid": "70b5df92-d53c-570a-9520-9c8bce129e22", 00:09:04.594 "is_configured": true, 00:09:04.594 "data_offset": 2048, 00:09:04.594 "data_size": 63488 00:09:04.594 } 00:09:04.594 ] 00:09:04.594 }' 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.594 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.162 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:05.162 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.162 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.162 [2024-11-26 17:53:46.812266] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.162 [2024-11-26 17:53:46.812368] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.162 [2024-11-26 17:53:46.815450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.162 [2024-11-26 17:53:46.815553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.162 [2024-11-26 17:53:46.815630] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.162 [2024-11-26 17:53:46.815703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:05.162 { 00:09:05.162 "results": [ 00:09:05.162 { 00:09:05.162 "job": "raid_bdev1", 00:09:05.162 "core_mask": "0x1", 00:09:05.162 "workload": "randrw", 00:09:05.162 "percentage": 50, 
00:09:05.162 "status": "finished", 00:09:05.162 "queue_depth": 1, 00:09:05.162 "io_size": 131072, 00:09:05.162 "runtime": 1.373088, 00:09:05.162 "iops": 14055.909016756392, 00:09:05.162 "mibps": 1756.988627094549, 00:09:05.162 "io_failed": 1, 00:09:05.162 "io_timeout": 0, 00:09:05.162 "avg_latency_us": 98.46262489736827, 00:09:05.162 "min_latency_us": 24.593886462882097, 00:09:05.162 "max_latency_us": 1445.2262008733624 00:09:05.162 } 00:09:05.162 ], 00:09:05.162 "core_count": 1 00:09:05.162 } 00:09:05.162 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.162 17:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65519 00:09:05.162 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65519 ']' 00:09:05.162 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65519 00:09:05.162 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:05.162 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.162 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65519 00:09:05.162 killing process with pid 65519 00:09:05.162 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.162 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.162 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65519' 00:09:05.162 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65519 00:09:05.162 [2024-11-26 17:53:46.856290] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.162 17:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65519 00:09:05.421 [2024-11-26 
17:53:47.111213] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:06.799 17:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:06.799 17:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ayPT0FBZJz 00:09:06.799 17:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:06.799 17:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:06.799 17:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:06.799 17:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:06.799 17:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:06.799 17:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:06.799 00:09:06.799 real 0m4.717s 00:09:06.799 user 0m5.646s 00:09:06.799 sys 0m0.516s 00:09:06.799 ************************************ 00:09:06.799 END TEST raid_read_error_test 00:09:06.799 ************************************ 00:09:06.799 17:53:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.799 17:53:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.799 17:53:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:06.799 17:53:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:06.799 17:53:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.799 17:53:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:06.799 ************************************ 00:09:06.799 START TEST raid_write_error_test 00:09:06.799 ************************************ 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:06.799 17:53:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:06.799 17:53:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fDJgXi9Soz 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65670 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65670 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65670 ']' 00:09:06.799 17:53:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.800 17:53:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.800 17:53:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:06.800 17:53:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.800 17:53:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.800 [2024-11-26 17:53:48.562626] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:09:06.800 [2024-11-26 17:53:48.562847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65670 ] 00:09:07.059 [2024-11-26 17:53:48.738033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.059 [2024-11-26 17:53:48.862015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.318 [2024-11-26 17:53:49.079054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.318 [2024-11-26 17:53:49.079210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.885 BaseBdev1_malloc 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.885 true 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.885 [2024-11-26 17:53:49.527224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:07.885 [2024-11-26 17:53:49.527352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.885 [2024-11-26 17:53:49.527419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:07.885 [2024-11-26 17:53:49.527458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.885 [2024-11-26 17:53:49.529930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.885 [2024-11-26 17:53:49.530030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:07.885 BaseBdev1 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:07.885 BaseBdev2_malloc 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.885 true 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.885 [2024-11-26 17:53:49.596875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:07.885 [2024-11-26 17:53:49.596958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.885 [2024-11-26 17:53:49.596979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:07.885 [2024-11-26 17:53:49.596991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.885 [2024-11-26 17:53:49.599215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.885 [2024-11-26 17:53:49.599255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:07.885 BaseBdev2 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:07.885 17:53:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:07.885 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.886 BaseBdev3_malloc 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.886 true 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.886 [2024-11-26 17:53:49.678946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:07.886 [2024-11-26 17:53:49.679014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.886 [2024-11-26 17:53:49.679050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:07.886 [2024-11-26 17:53:49.679061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.886 [2024-11-26 17:53:49.681438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.886 [2024-11-26 17:53:49.681549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:07.886 BaseBdev3 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.886 [2024-11-26 17:53:49.691000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:07.886 [2024-11-26 17:53:49.692839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.886 [2024-11-26 17:53:49.693007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:07.886 [2024-11-26 17:53:49.693251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:07.886 [2024-11-26 17:53:49.693269] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:07.886 [2024-11-26 17:53:49.693562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:07.886 [2024-11-26 17:53:49.693747] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:07.886 [2024-11-26 17:53:49.693762] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:07.886 [2024-11-26 17:53:49.693942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.886 "name": "raid_bdev1", 00:09:07.886 "uuid": "b47c4d84-a988-4fdb-88c4-b39cd37a4759", 00:09:07.886 "strip_size_kb": 64, 00:09:07.886 "state": "online", 00:09:07.886 "raid_level": "raid0", 00:09:07.886 "superblock": true, 00:09:07.886 "num_base_bdevs": 3, 00:09:07.886 "num_base_bdevs_discovered": 3, 00:09:07.886 "num_base_bdevs_operational": 3, 00:09:07.886 "base_bdevs_list": [ 00:09:07.886 { 00:09:07.886 "name": "BaseBdev1", 
00:09:07.886 "uuid": "b8181631-0a86-5015-a4eb-e98d4c7dcfdf", 00:09:07.886 "is_configured": true, 00:09:07.886 "data_offset": 2048, 00:09:07.886 "data_size": 63488 00:09:07.886 }, 00:09:07.886 { 00:09:07.886 "name": "BaseBdev2", 00:09:07.886 "uuid": "ff7712cb-52ca-58a3-8f8c-86922959bd5e", 00:09:07.886 "is_configured": true, 00:09:07.886 "data_offset": 2048, 00:09:07.886 "data_size": 63488 00:09:07.886 }, 00:09:07.886 { 00:09:07.886 "name": "BaseBdev3", 00:09:07.886 "uuid": "52532c6c-cef6-5394-948e-da986acbf696", 00:09:07.886 "is_configured": true, 00:09:07.886 "data_offset": 2048, 00:09:07.886 "data_size": 63488 00:09:07.886 } 00:09:07.886 ] 00:09:07.886 }' 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.886 17:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.454 17:53:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:08.454 17:53:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:08.454 [2024-11-26 17:53:50.219400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.493 "name": "raid_bdev1", 00:09:09.493 "uuid": "b47c4d84-a988-4fdb-88c4-b39cd37a4759", 00:09:09.493 "strip_size_kb": 64, 00:09:09.493 "state": "online", 00:09:09.493 
"raid_level": "raid0", 00:09:09.493 "superblock": true, 00:09:09.493 "num_base_bdevs": 3, 00:09:09.493 "num_base_bdevs_discovered": 3, 00:09:09.493 "num_base_bdevs_operational": 3, 00:09:09.493 "base_bdevs_list": [ 00:09:09.493 { 00:09:09.493 "name": "BaseBdev1", 00:09:09.493 "uuid": "b8181631-0a86-5015-a4eb-e98d4c7dcfdf", 00:09:09.493 "is_configured": true, 00:09:09.493 "data_offset": 2048, 00:09:09.493 "data_size": 63488 00:09:09.493 }, 00:09:09.493 { 00:09:09.493 "name": "BaseBdev2", 00:09:09.493 "uuid": "ff7712cb-52ca-58a3-8f8c-86922959bd5e", 00:09:09.493 "is_configured": true, 00:09:09.493 "data_offset": 2048, 00:09:09.493 "data_size": 63488 00:09:09.493 }, 00:09:09.493 { 00:09:09.493 "name": "BaseBdev3", 00:09:09.493 "uuid": "52532c6c-cef6-5394-948e-da986acbf696", 00:09:09.493 "is_configured": true, 00:09:09.493 "data_offset": 2048, 00:09:09.493 "data_size": 63488 00:09:09.493 } 00:09:09.493 ] 00:09:09.493 }' 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.493 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.752 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:09.752 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.752 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.752 [2024-11-26 17:53:51.535493] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.752 [2024-11-26 17:53:51.535536] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.752 [2024-11-26 17:53:51.538947] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.752 [2024-11-26 17:53:51.539065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.752 [2024-11-26 17:53:51.539140] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.752 [2024-11-26 17:53:51.539191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:09.752 { 00:09:09.752 "results": [ 00:09:09.752 { 00:09:09.752 "job": "raid_bdev1", 00:09:09.752 "core_mask": "0x1", 00:09:09.752 "workload": "randrw", 00:09:09.752 "percentage": 50, 00:09:09.752 "status": "finished", 00:09:09.752 "queue_depth": 1, 00:09:09.752 "io_size": 131072, 00:09:09.752 "runtime": 1.316924, 00:09:09.752 "iops": 14631.823856198233, 00:09:09.752 "mibps": 1828.977982024779, 00:09:09.752 "io_failed": 1, 00:09:09.752 "io_timeout": 0, 00:09:09.752 "avg_latency_us": 94.66994377757584, 00:09:09.752 "min_latency_us": 21.463755458515283, 00:09:09.752 "max_latency_us": 1416.6078602620087 00:09:09.752 } 00:09:09.752 ], 00:09:09.752 "core_count": 1 00:09:09.752 } 00:09:09.752 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.752 17:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65670 00:09:09.752 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65670 ']' 00:09:09.752 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65670 00:09:09.752 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:09.752 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.752 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65670 00:09:09.752 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.752 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.752 17:53:51 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65670' 00:09:09.752 killing process with pid 65670 00:09:09.752 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65670 00:09:09.752 [2024-11-26 17:53:51.583914] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:09.752 17:53:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65670 00:09:10.011 [2024-11-26 17:53:51.827646] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:11.384 17:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fDJgXi9Soz 00:09:11.384 17:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:11.384 17:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:11.384 17:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:09:11.384 17:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:11.384 17:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.384 17:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.384 17:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:09:11.384 ************************************ 00:09:11.384 END TEST raid_write_error_test 00:09:11.384 ************************************ 00:09:11.384 00:09:11.384 real 0m4.662s 00:09:11.384 user 0m5.533s 00:09:11.384 sys 0m0.532s 00:09:11.384 17:53:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.384 17:53:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.384 17:53:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:11.384 17:53:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:11.384 17:53:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:11.384 17:53:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.384 17:53:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:11.384 ************************************ 00:09:11.384 START TEST raid_state_function_test 00:09:11.384 ************************************ 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:11.384 17:53:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65808 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65808' 00:09:11.384 Process raid pid: 65808 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65808 00:09:11.384 17:53:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65808 ']' 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.384 17:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.641 [2024-11-26 17:53:53.285284] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:09:11.641 [2024-11-26 17:53:53.285506] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.641 [2024-11-26 17:53:53.441240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.897 [2024-11-26 17:53:53.561562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.156 [2024-11-26 17:53:53.776218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.156 [2024-11-26 17:53:53.776263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.415 [2024-11-26 17:53:54.146195] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.415 [2024-11-26 17:53:54.146260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.415 [2024-11-26 17:53:54.146273] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:12.415 [2024-11-26 17:53:54.146284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:12.415 [2024-11-26 17:53:54.146292] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:12.415 [2024-11-26 17:53:54.146302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.415 "name": "Existed_Raid", 00:09:12.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.415 "strip_size_kb": 64, 00:09:12.415 "state": "configuring", 00:09:12.415 "raid_level": "concat", 00:09:12.415 "superblock": false, 00:09:12.415 "num_base_bdevs": 3, 00:09:12.415 "num_base_bdevs_discovered": 0, 00:09:12.415 "num_base_bdevs_operational": 3, 00:09:12.415 "base_bdevs_list": [ 00:09:12.415 { 00:09:12.415 "name": "BaseBdev1", 00:09:12.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.415 "is_configured": false, 00:09:12.415 "data_offset": 0, 00:09:12.415 "data_size": 0 00:09:12.415 }, 00:09:12.415 { 00:09:12.415 "name": "BaseBdev2", 00:09:12.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.415 "is_configured": false, 00:09:12.415 "data_offset": 0, 00:09:12.415 "data_size": 0 00:09:12.415 }, 00:09:12.415 { 00:09:12.415 "name": "BaseBdev3", 00:09:12.415 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:12.415 "is_configured": false, 00:09:12.415 "data_offset": 0, 00:09:12.415 "data_size": 0 00:09:12.415 } 00:09:12.415 ] 00:09:12.415 }' 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.415 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.979 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:12.979 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.979 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.979 [2024-11-26 17:53:54.617311] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:12.979 [2024-11-26 17:53:54.617429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:12.979 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.979 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:12.979 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.979 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.979 [2024-11-26 17:53:54.629307] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.979 [2024-11-26 17:53:54.629408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.979 [2024-11-26 17:53:54.629439] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:12.979 [2024-11-26 17:53:54.629464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:12.979 [2024-11-26 17:53:54.629484] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:12.979 [2024-11-26 17:53:54.629506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:12.979 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.979 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:12.979 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.979 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.980 [2024-11-26 17:53:54.679562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:12.980 BaseBdev1 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.980 [ 00:09:12.980 { 00:09:12.980 "name": "BaseBdev1", 00:09:12.980 "aliases": [ 00:09:12.980 "ed51c875-336a-4e57-a5be-1687209d5c87" 00:09:12.980 ], 00:09:12.980 "product_name": "Malloc disk", 00:09:12.980 "block_size": 512, 00:09:12.980 "num_blocks": 65536, 00:09:12.980 "uuid": "ed51c875-336a-4e57-a5be-1687209d5c87", 00:09:12.980 "assigned_rate_limits": { 00:09:12.980 "rw_ios_per_sec": 0, 00:09:12.980 "rw_mbytes_per_sec": 0, 00:09:12.980 "r_mbytes_per_sec": 0, 00:09:12.980 "w_mbytes_per_sec": 0 00:09:12.980 }, 00:09:12.980 "claimed": true, 00:09:12.980 "claim_type": "exclusive_write", 00:09:12.980 "zoned": false, 00:09:12.980 "supported_io_types": { 00:09:12.980 "read": true, 00:09:12.980 "write": true, 00:09:12.980 "unmap": true, 00:09:12.980 "flush": true, 00:09:12.980 "reset": true, 00:09:12.980 "nvme_admin": false, 00:09:12.980 "nvme_io": false, 00:09:12.980 "nvme_io_md": false, 00:09:12.980 "write_zeroes": true, 00:09:12.980 "zcopy": true, 00:09:12.980 "get_zone_info": false, 00:09:12.980 "zone_management": false, 00:09:12.980 "zone_append": false, 00:09:12.980 "compare": false, 00:09:12.980 "compare_and_write": false, 00:09:12.980 "abort": true, 00:09:12.980 "seek_hole": false, 00:09:12.980 "seek_data": false, 00:09:12.980 "copy": true, 00:09:12.980 "nvme_iov_md": false 00:09:12.980 }, 00:09:12.980 "memory_domains": [ 00:09:12.980 { 00:09:12.980 "dma_device_id": "system", 00:09:12.980 "dma_device_type": 1 00:09:12.980 }, 00:09:12.980 { 00:09:12.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:12.980 "dma_device_type": 2 00:09:12.980 } 00:09:12.980 ], 00:09:12.980 "driver_specific": {} 00:09:12.980 } 00:09:12.980 ] 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.980 17:53:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.980 "name": "Existed_Raid", 00:09:12.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.980 "strip_size_kb": 64, 00:09:12.980 "state": "configuring", 00:09:12.980 "raid_level": "concat", 00:09:12.980 "superblock": false, 00:09:12.980 "num_base_bdevs": 3, 00:09:12.980 "num_base_bdevs_discovered": 1, 00:09:12.980 "num_base_bdevs_operational": 3, 00:09:12.980 "base_bdevs_list": [ 00:09:12.980 { 00:09:12.980 "name": "BaseBdev1", 00:09:12.980 "uuid": "ed51c875-336a-4e57-a5be-1687209d5c87", 00:09:12.980 "is_configured": true, 00:09:12.980 "data_offset": 0, 00:09:12.980 "data_size": 65536 00:09:12.980 }, 00:09:12.980 { 00:09:12.980 "name": "BaseBdev2", 00:09:12.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.980 "is_configured": false, 00:09:12.980 "data_offset": 0, 00:09:12.980 "data_size": 0 00:09:12.980 }, 00:09:12.980 { 00:09:12.980 "name": "BaseBdev3", 00:09:12.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.980 "is_configured": false, 00:09:12.980 "data_offset": 0, 00:09:12.980 "data_size": 0 00:09:12.980 } 00:09:12.980 ] 00:09:12.980 }' 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.980 17:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.549 [2024-11-26 17:53:55.154807] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:13.549 [2024-11-26 17:53:55.154954] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.549 [2024-11-26 17:53:55.166856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.549 [2024-11-26 17:53:55.168945] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.549 [2024-11-26 17:53:55.169057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.549 [2024-11-26 17:53:55.169095] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:13.549 [2024-11-26 17:53:55.169122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.549 17:53:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.549 "name": "Existed_Raid", 00:09:13.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.549 "strip_size_kb": 64, 00:09:13.549 "state": "configuring", 00:09:13.549 "raid_level": "concat", 00:09:13.549 "superblock": false, 00:09:13.549 "num_base_bdevs": 3, 00:09:13.549 "num_base_bdevs_discovered": 1, 00:09:13.549 "num_base_bdevs_operational": 3, 00:09:13.549 "base_bdevs_list": [ 00:09:13.549 { 00:09:13.549 "name": "BaseBdev1", 00:09:13.549 "uuid": "ed51c875-336a-4e57-a5be-1687209d5c87", 00:09:13.549 "is_configured": true, 00:09:13.549 "data_offset": 
0, 00:09:13.549 "data_size": 65536 00:09:13.549 }, 00:09:13.549 { 00:09:13.549 "name": "BaseBdev2", 00:09:13.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.549 "is_configured": false, 00:09:13.549 "data_offset": 0, 00:09:13.549 "data_size": 0 00:09:13.549 }, 00:09:13.549 { 00:09:13.549 "name": "BaseBdev3", 00:09:13.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.549 "is_configured": false, 00:09:13.549 "data_offset": 0, 00:09:13.549 "data_size": 0 00:09:13.549 } 00:09:13.549 ] 00:09:13.549 }' 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.549 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.808 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:13.808 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.808 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.808 [2024-11-26 17:53:55.650834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.808 BaseBdev2 00:09:13.808 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.808 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:13.808 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:13.808 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.808 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:13.808 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.808 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:13.808 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.808 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.808 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.808 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.808 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:13.808 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.808 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.068 [ 00:09:14.068 { 00:09:14.068 "name": "BaseBdev2", 00:09:14.068 "aliases": [ 00:09:14.068 "9d4a95bf-181f-4189-92fd-d6cd2ac43eb7" 00:09:14.068 ], 00:09:14.068 "product_name": "Malloc disk", 00:09:14.068 "block_size": 512, 00:09:14.068 "num_blocks": 65536, 00:09:14.068 "uuid": "9d4a95bf-181f-4189-92fd-d6cd2ac43eb7", 00:09:14.068 "assigned_rate_limits": { 00:09:14.068 "rw_ios_per_sec": 0, 00:09:14.068 "rw_mbytes_per_sec": 0, 00:09:14.068 "r_mbytes_per_sec": 0, 00:09:14.068 "w_mbytes_per_sec": 0 00:09:14.068 }, 00:09:14.068 "claimed": true, 00:09:14.068 "claim_type": "exclusive_write", 00:09:14.068 "zoned": false, 00:09:14.068 "supported_io_types": { 00:09:14.068 "read": true, 00:09:14.068 "write": true, 00:09:14.068 "unmap": true, 00:09:14.068 "flush": true, 00:09:14.068 "reset": true, 00:09:14.068 "nvme_admin": false, 00:09:14.068 "nvme_io": false, 00:09:14.068 "nvme_io_md": false, 00:09:14.068 "write_zeroes": true, 00:09:14.068 "zcopy": true, 00:09:14.068 "get_zone_info": false, 00:09:14.068 "zone_management": false, 00:09:14.068 "zone_append": false, 00:09:14.069 "compare": false, 00:09:14.069 "compare_and_write": false, 00:09:14.069 "abort": true, 00:09:14.069 "seek_hole": 
false, 00:09:14.069 "seek_data": false, 00:09:14.069 "copy": true, 00:09:14.069 "nvme_iov_md": false 00:09:14.069 }, 00:09:14.069 "memory_domains": [ 00:09:14.069 { 00:09:14.069 "dma_device_id": "system", 00:09:14.069 "dma_device_type": 1 00:09:14.069 }, 00:09:14.069 { 00:09:14.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.069 "dma_device_type": 2 00:09:14.069 } 00:09:14.069 ], 00:09:14.069 "driver_specific": {} 00:09:14.069 } 00:09:14.069 ] 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.069 "name": "Existed_Raid", 00:09:14.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.069 "strip_size_kb": 64, 00:09:14.069 "state": "configuring", 00:09:14.069 "raid_level": "concat", 00:09:14.069 "superblock": false, 00:09:14.069 "num_base_bdevs": 3, 00:09:14.069 "num_base_bdevs_discovered": 2, 00:09:14.069 "num_base_bdevs_operational": 3, 00:09:14.069 "base_bdevs_list": [ 00:09:14.069 { 00:09:14.069 "name": "BaseBdev1", 00:09:14.069 "uuid": "ed51c875-336a-4e57-a5be-1687209d5c87", 00:09:14.069 "is_configured": true, 00:09:14.069 "data_offset": 0, 00:09:14.069 "data_size": 65536 00:09:14.069 }, 00:09:14.069 { 00:09:14.069 "name": "BaseBdev2", 00:09:14.069 "uuid": "9d4a95bf-181f-4189-92fd-d6cd2ac43eb7", 00:09:14.069 "is_configured": true, 00:09:14.069 "data_offset": 0, 00:09:14.069 "data_size": 65536 00:09:14.069 }, 00:09:14.069 { 00:09:14.069 "name": "BaseBdev3", 00:09:14.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.069 "is_configured": false, 00:09:14.069 "data_offset": 0, 00:09:14.069 "data_size": 0 00:09:14.069 } 00:09:14.069 ] 00:09:14.069 }' 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.069 17:53:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.329 [2024-11-26 17:53:56.120622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.329 [2024-11-26 17:53:56.120765] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:14.329 [2024-11-26 17:53:56.120797] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:14.329 [2024-11-26 17:53:56.121134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:14.329 [2024-11-26 17:53:56.121319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:14.329 [2024-11-26 17:53:56.121331] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:14.329 [2024-11-26 17:53:56.121614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.329 BaseBdev3 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.329 17:53:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.329 [ 00:09:14.329 { 00:09:14.329 "name": "BaseBdev3", 00:09:14.329 "aliases": [ 00:09:14.329 "d6252730-3e09-406f-9ae8-e3b5c21b26f9" 00:09:14.329 ], 00:09:14.329 "product_name": "Malloc disk", 00:09:14.329 "block_size": 512, 00:09:14.329 "num_blocks": 65536, 00:09:14.329 "uuid": "d6252730-3e09-406f-9ae8-e3b5c21b26f9", 00:09:14.329 "assigned_rate_limits": { 00:09:14.329 "rw_ios_per_sec": 0, 00:09:14.329 "rw_mbytes_per_sec": 0, 00:09:14.329 "r_mbytes_per_sec": 0, 00:09:14.329 "w_mbytes_per_sec": 0 00:09:14.329 }, 00:09:14.329 "claimed": true, 00:09:14.329 "claim_type": "exclusive_write", 00:09:14.329 "zoned": false, 00:09:14.329 "supported_io_types": { 00:09:14.329 "read": true, 00:09:14.329 "write": true, 00:09:14.329 "unmap": true, 00:09:14.329 "flush": true, 00:09:14.329 "reset": true, 00:09:14.329 "nvme_admin": false, 00:09:14.329 "nvme_io": false, 00:09:14.329 "nvme_io_md": false, 00:09:14.329 "write_zeroes": true, 00:09:14.329 "zcopy": true, 00:09:14.329 "get_zone_info": false, 00:09:14.329 "zone_management": false, 00:09:14.329 "zone_append": false, 00:09:14.329 "compare": false, 
00:09:14.329 "compare_and_write": false, 00:09:14.329 "abort": true, 00:09:14.329 "seek_hole": false, 00:09:14.329 "seek_data": false, 00:09:14.329 "copy": true, 00:09:14.329 "nvme_iov_md": false 00:09:14.329 }, 00:09:14.329 "memory_domains": [ 00:09:14.329 { 00:09:14.329 "dma_device_id": "system", 00:09:14.329 "dma_device_type": 1 00:09:14.329 }, 00:09:14.329 { 00:09:14.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.329 "dma_device_type": 2 00:09:14.329 } 00:09:14.329 ], 00:09:14.329 "driver_specific": {} 00:09:14.329 } 00:09:14.329 ] 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.329 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.588 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.588 "name": "Existed_Raid", 00:09:14.588 "uuid": "d3b1181b-fb8f-4433-92af-dc73b032a877", 00:09:14.588 "strip_size_kb": 64, 00:09:14.588 "state": "online", 00:09:14.588 "raid_level": "concat", 00:09:14.588 "superblock": false, 00:09:14.588 "num_base_bdevs": 3, 00:09:14.588 "num_base_bdevs_discovered": 3, 00:09:14.588 "num_base_bdevs_operational": 3, 00:09:14.588 "base_bdevs_list": [ 00:09:14.588 { 00:09:14.588 "name": "BaseBdev1", 00:09:14.588 "uuid": "ed51c875-336a-4e57-a5be-1687209d5c87", 00:09:14.588 "is_configured": true, 00:09:14.588 "data_offset": 0, 00:09:14.588 "data_size": 65536 00:09:14.588 }, 00:09:14.589 { 00:09:14.589 "name": "BaseBdev2", 00:09:14.589 "uuid": "9d4a95bf-181f-4189-92fd-d6cd2ac43eb7", 00:09:14.589 "is_configured": true, 00:09:14.589 "data_offset": 0, 00:09:14.589 "data_size": 65536 00:09:14.589 }, 00:09:14.589 { 00:09:14.589 "name": "BaseBdev3", 00:09:14.589 "uuid": "d6252730-3e09-406f-9ae8-e3b5c21b26f9", 00:09:14.589 "is_configured": true, 00:09:14.589 "data_offset": 0, 00:09:14.589 "data_size": 65536 00:09:14.589 } 00:09:14.589 ] 00:09:14.589 }' 00:09:14.589 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:14.589 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.848 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:14.848 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:14.848 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:14.848 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:14.848 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:14.848 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:14.848 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:14.848 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:14.848 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.848 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.848 [2024-11-26 17:53:56.628256] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.848 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.848 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:14.848 "name": "Existed_Raid", 00:09:14.848 "aliases": [ 00:09:14.848 "d3b1181b-fb8f-4433-92af-dc73b032a877" 00:09:14.848 ], 00:09:14.848 "product_name": "Raid Volume", 00:09:14.848 "block_size": 512, 00:09:14.848 "num_blocks": 196608, 00:09:14.848 "uuid": "d3b1181b-fb8f-4433-92af-dc73b032a877", 00:09:14.848 "assigned_rate_limits": { 00:09:14.848 "rw_ios_per_sec": 0, 00:09:14.848 "rw_mbytes_per_sec": 0, 00:09:14.848 "r_mbytes_per_sec": 
0, 00:09:14.848 "w_mbytes_per_sec": 0 00:09:14.848 }, 00:09:14.848 "claimed": false, 00:09:14.848 "zoned": false, 00:09:14.848 "supported_io_types": { 00:09:14.848 "read": true, 00:09:14.848 "write": true, 00:09:14.848 "unmap": true, 00:09:14.848 "flush": true, 00:09:14.848 "reset": true, 00:09:14.848 "nvme_admin": false, 00:09:14.848 "nvme_io": false, 00:09:14.848 "nvme_io_md": false, 00:09:14.848 "write_zeroes": true, 00:09:14.848 "zcopy": false, 00:09:14.848 "get_zone_info": false, 00:09:14.848 "zone_management": false, 00:09:14.848 "zone_append": false, 00:09:14.848 "compare": false, 00:09:14.848 "compare_and_write": false, 00:09:14.848 "abort": false, 00:09:14.848 "seek_hole": false, 00:09:14.848 "seek_data": false, 00:09:14.848 "copy": false, 00:09:14.848 "nvme_iov_md": false 00:09:14.848 }, 00:09:14.848 "memory_domains": [ 00:09:14.848 { 00:09:14.848 "dma_device_id": "system", 00:09:14.848 "dma_device_type": 1 00:09:14.848 }, 00:09:14.848 { 00:09:14.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.849 "dma_device_type": 2 00:09:14.849 }, 00:09:14.849 { 00:09:14.849 "dma_device_id": "system", 00:09:14.849 "dma_device_type": 1 00:09:14.849 }, 00:09:14.849 { 00:09:14.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.849 "dma_device_type": 2 00:09:14.849 }, 00:09:14.849 { 00:09:14.849 "dma_device_id": "system", 00:09:14.849 "dma_device_type": 1 00:09:14.849 }, 00:09:14.849 { 00:09:14.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.849 "dma_device_type": 2 00:09:14.849 } 00:09:14.849 ], 00:09:14.849 "driver_specific": { 00:09:14.849 "raid": { 00:09:14.849 "uuid": "d3b1181b-fb8f-4433-92af-dc73b032a877", 00:09:14.849 "strip_size_kb": 64, 00:09:14.849 "state": "online", 00:09:14.849 "raid_level": "concat", 00:09:14.849 "superblock": false, 00:09:14.849 "num_base_bdevs": 3, 00:09:14.849 "num_base_bdevs_discovered": 3, 00:09:14.849 "num_base_bdevs_operational": 3, 00:09:14.849 "base_bdevs_list": [ 00:09:14.849 { 00:09:14.849 "name": "BaseBdev1", 
00:09:14.849 "uuid": "ed51c875-336a-4e57-a5be-1687209d5c87", 00:09:14.849 "is_configured": true, 00:09:14.849 "data_offset": 0, 00:09:14.849 "data_size": 65536 00:09:14.849 }, 00:09:14.849 { 00:09:14.849 "name": "BaseBdev2", 00:09:14.849 "uuid": "9d4a95bf-181f-4189-92fd-d6cd2ac43eb7", 00:09:14.849 "is_configured": true, 00:09:14.849 "data_offset": 0, 00:09:14.849 "data_size": 65536 00:09:14.849 }, 00:09:14.849 { 00:09:14.849 "name": "BaseBdev3", 00:09:14.849 "uuid": "d6252730-3e09-406f-9ae8-e3b5c21b26f9", 00:09:14.849 "is_configured": true, 00:09:14.849 "data_offset": 0, 00:09:14.849 "data_size": 65536 00:09:14.849 } 00:09:14.849 ] 00:09:14.849 } 00:09:14.849 } 00:09:14.849 }' 00:09:14.849 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:14.849 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:14.849 BaseBdev2 00:09:14.849 BaseBdev3' 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.108 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.108 [2024-11-26 17:53:56.895487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:15.108 [2024-11-26 17:53:56.895519] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.108 [2024-11-26 17:53:56.895577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.368 17:53:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.368 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:15.368 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:15.368 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:15.368 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:15.368 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:15.368 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:15.368 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.368 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:15.368 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.368 17:53:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.368 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:15.368 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.368 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.368 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.368 17:53:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.368 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.368 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.368 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.368 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.368 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.368 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.368 "name": "Existed_Raid", 00:09:15.368 "uuid": "d3b1181b-fb8f-4433-92af-dc73b032a877", 00:09:15.368 "strip_size_kb": 64, 00:09:15.368 "state": "offline", 00:09:15.368 "raid_level": "concat", 00:09:15.368 "superblock": false, 00:09:15.368 "num_base_bdevs": 3, 00:09:15.368 "num_base_bdevs_discovered": 2, 00:09:15.368 "num_base_bdevs_operational": 2, 00:09:15.368 "base_bdevs_list": [ 00:09:15.368 { 00:09:15.368 "name": null, 00:09:15.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.368 "is_configured": false, 00:09:15.368 "data_offset": 0, 00:09:15.368 "data_size": 65536 00:09:15.368 }, 00:09:15.368 { 00:09:15.368 "name": "BaseBdev2", 00:09:15.368 "uuid": 
"9d4a95bf-181f-4189-92fd-d6cd2ac43eb7", 00:09:15.368 "is_configured": true, 00:09:15.368 "data_offset": 0, 00:09:15.368 "data_size": 65536 00:09:15.368 }, 00:09:15.368 { 00:09:15.368 "name": "BaseBdev3", 00:09:15.368 "uuid": "d6252730-3e09-406f-9ae8-e3b5c21b26f9", 00:09:15.368 "is_configured": true, 00:09:15.368 "data_offset": 0, 00:09:15.368 "data_size": 65536 00:09:15.368 } 00:09:15.368 ] 00:09:15.368 }' 00:09:15.368 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.368 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.628 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:15.628 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:15.628 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.628 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:15.628 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.628 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.628 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.628 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:15.628 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:15.628 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:15.628 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.628 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.628 [2024-11-26 17:53:57.464827] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:15.888 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.888 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:15.888 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:15.888 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.888 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.888 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.888 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:15.888 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.888 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:15.888 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:15.888 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:15.888 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.888 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.888 [2024-11-26 17:53:57.624263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:15.888 [2024-11-26 17:53:57.624319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:15.888 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.888 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:15.888 17:53:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:15.888 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.888 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:15.888 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.889 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.889 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.148 BaseBdev2 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.148 
17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.148 [ 00:09:16.148 { 00:09:16.148 "name": "BaseBdev2", 00:09:16.148 "aliases": [ 00:09:16.148 "123ac421-2cbc-4ba1-b459-d84e940815b5" 00:09:16.148 ], 00:09:16.148 "product_name": "Malloc disk", 00:09:16.148 "block_size": 512, 00:09:16.148 "num_blocks": 65536, 00:09:16.148 "uuid": "123ac421-2cbc-4ba1-b459-d84e940815b5", 00:09:16.148 "assigned_rate_limits": { 00:09:16.148 "rw_ios_per_sec": 0, 00:09:16.148 "rw_mbytes_per_sec": 0, 00:09:16.148 "r_mbytes_per_sec": 0, 00:09:16.148 "w_mbytes_per_sec": 0 00:09:16.148 }, 00:09:16.148 "claimed": false, 00:09:16.148 "zoned": false, 00:09:16.148 "supported_io_types": { 00:09:16.148 "read": true, 00:09:16.148 "write": true, 00:09:16.148 "unmap": true, 00:09:16.148 "flush": true, 00:09:16.148 "reset": true, 00:09:16.148 "nvme_admin": false, 00:09:16.148 "nvme_io": false, 00:09:16.148 "nvme_io_md": false, 00:09:16.148 "write_zeroes": true, 
00:09:16.148 "zcopy": true, 00:09:16.148 "get_zone_info": false, 00:09:16.148 "zone_management": false, 00:09:16.148 "zone_append": false, 00:09:16.148 "compare": false, 00:09:16.148 "compare_and_write": false, 00:09:16.148 "abort": true, 00:09:16.148 "seek_hole": false, 00:09:16.148 "seek_data": false, 00:09:16.148 "copy": true, 00:09:16.148 "nvme_iov_md": false 00:09:16.148 }, 00:09:16.148 "memory_domains": [ 00:09:16.148 { 00:09:16.148 "dma_device_id": "system", 00:09:16.148 "dma_device_type": 1 00:09:16.148 }, 00:09:16.148 { 00:09:16.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.148 "dma_device_type": 2 00:09:16.148 } 00:09:16.148 ], 00:09:16.148 "driver_specific": {} 00:09:16.148 } 00:09:16.148 ] 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.148 BaseBdev3 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.148 17:53:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.148 [ 00:09:16.148 { 00:09:16.148 "name": "BaseBdev3", 00:09:16.148 "aliases": [ 00:09:16.148 "7f931035-aba9-4ab6-b407-7bf7c0bfbc2f" 00:09:16.148 ], 00:09:16.148 "product_name": "Malloc disk", 00:09:16.148 "block_size": 512, 00:09:16.148 "num_blocks": 65536, 00:09:16.148 "uuid": "7f931035-aba9-4ab6-b407-7bf7c0bfbc2f", 00:09:16.148 "assigned_rate_limits": { 00:09:16.148 "rw_ios_per_sec": 0, 00:09:16.148 "rw_mbytes_per_sec": 0, 00:09:16.148 "r_mbytes_per_sec": 0, 00:09:16.148 "w_mbytes_per_sec": 0 00:09:16.148 }, 00:09:16.148 "claimed": false, 00:09:16.148 "zoned": false, 00:09:16.148 "supported_io_types": { 00:09:16.148 "read": true, 00:09:16.148 "write": true, 00:09:16.148 "unmap": true, 00:09:16.148 "flush": true, 00:09:16.148 "reset": true, 00:09:16.148 "nvme_admin": false, 00:09:16.148 "nvme_io": false, 00:09:16.148 "nvme_io_md": false, 00:09:16.148 "write_zeroes": true, 
00:09:16.148 "zcopy": true, 00:09:16.148 "get_zone_info": false, 00:09:16.148 "zone_management": false, 00:09:16.148 "zone_append": false, 00:09:16.148 "compare": false, 00:09:16.148 "compare_and_write": false, 00:09:16.148 "abort": true, 00:09:16.148 "seek_hole": false, 00:09:16.148 "seek_data": false, 00:09:16.148 "copy": true, 00:09:16.148 "nvme_iov_md": false 00:09:16.148 }, 00:09:16.148 "memory_domains": [ 00:09:16.148 { 00:09:16.148 "dma_device_id": "system", 00:09:16.148 "dma_device_type": 1 00:09:16.148 }, 00:09:16.148 { 00:09:16.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.148 "dma_device_type": 2 00:09:16.148 } 00:09:16.148 ], 00:09:16.148 "driver_specific": {} 00:09:16.148 } 00:09:16.148 ] 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:16.148 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.149 [2024-11-26 17:53:57.959981] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:16.149 [2024-11-26 17:53:57.960156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:16.149 [2024-11-26 17:53:57.960223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:16.149 [2024-11-26 17:53:57.962556] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.149 17:53:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.408 17:53:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.408 "name": "Existed_Raid", 00:09:16.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.408 "strip_size_kb": 64, 00:09:16.408 "state": "configuring", 00:09:16.408 "raid_level": "concat", 00:09:16.408 "superblock": false, 00:09:16.408 "num_base_bdevs": 3, 00:09:16.408 "num_base_bdevs_discovered": 2, 00:09:16.408 "num_base_bdevs_operational": 3, 00:09:16.408 "base_bdevs_list": [ 00:09:16.408 { 00:09:16.408 "name": "BaseBdev1", 00:09:16.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.408 "is_configured": false, 00:09:16.408 "data_offset": 0, 00:09:16.408 "data_size": 0 00:09:16.408 }, 00:09:16.408 { 00:09:16.408 "name": "BaseBdev2", 00:09:16.408 "uuid": "123ac421-2cbc-4ba1-b459-d84e940815b5", 00:09:16.408 "is_configured": true, 00:09:16.408 "data_offset": 0, 00:09:16.408 "data_size": 65536 00:09:16.408 }, 00:09:16.408 { 00:09:16.408 "name": "BaseBdev3", 00:09:16.408 "uuid": "7f931035-aba9-4ab6-b407-7bf7c0bfbc2f", 00:09:16.408 "is_configured": true, 00:09:16.408 "data_offset": 0, 00:09:16.408 "data_size": 65536 00:09:16.408 } 00:09:16.408 ] 00:09:16.408 }' 00:09:16.408 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.408 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.668 [2024-11-26 17:53:58.387232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.668 "name": "Existed_Raid", 00:09:16.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.668 "strip_size_kb": 64, 00:09:16.668 "state": "configuring", 00:09:16.668 "raid_level": "concat", 00:09:16.668 "superblock": false, 
00:09:16.668 "num_base_bdevs": 3, 00:09:16.668 "num_base_bdevs_discovered": 1, 00:09:16.668 "num_base_bdevs_operational": 3, 00:09:16.668 "base_bdevs_list": [ 00:09:16.668 { 00:09:16.668 "name": "BaseBdev1", 00:09:16.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.668 "is_configured": false, 00:09:16.668 "data_offset": 0, 00:09:16.668 "data_size": 0 00:09:16.668 }, 00:09:16.668 { 00:09:16.668 "name": null, 00:09:16.668 "uuid": "123ac421-2cbc-4ba1-b459-d84e940815b5", 00:09:16.668 "is_configured": false, 00:09:16.668 "data_offset": 0, 00:09:16.668 "data_size": 65536 00:09:16.668 }, 00:09:16.668 { 00:09:16.668 "name": "BaseBdev3", 00:09:16.668 "uuid": "7f931035-aba9-4ab6-b407-7bf7c0bfbc2f", 00:09:16.668 "is_configured": true, 00:09:16.668 "data_offset": 0, 00:09:16.668 "data_size": 65536 00:09:16.668 } 00:09:16.668 ] 00:09:16.668 }' 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.668 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.236 
17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.236 [2024-11-26 17:53:58.872439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.236 BaseBdev1 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.236 [ 00:09:17.236 { 00:09:17.236 "name": "BaseBdev1", 00:09:17.236 "aliases": [ 00:09:17.236 "d7828ba9-9b93-4aaa-b3aa-a1184421cdfa" 00:09:17.236 ], 00:09:17.236 "product_name": 
"Malloc disk", 00:09:17.236 "block_size": 512, 00:09:17.236 "num_blocks": 65536, 00:09:17.236 "uuid": "d7828ba9-9b93-4aaa-b3aa-a1184421cdfa", 00:09:17.236 "assigned_rate_limits": { 00:09:17.236 "rw_ios_per_sec": 0, 00:09:17.236 "rw_mbytes_per_sec": 0, 00:09:17.236 "r_mbytes_per_sec": 0, 00:09:17.236 "w_mbytes_per_sec": 0 00:09:17.236 }, 00:09:17.236 "claimed": true, 00:09:17.236 "claim_type": "exclusive_write", 00:09:17.236 "zoned": false, 00:09:17.236 "supported_io_types": { 00:09:17.236 "read": true, 00:09:17.236 "write": true, 00:09:17.236 "unmap": true, 00:09:17.236 "flush": true, 00:09:17.236 "reset": true, 00:09:17.236 "nvme_admin": false, 00:09:17.236 "nvme_io": false, 00:09:17.236 "nvme_io_md": false, 00:09:17.236 "write_zeroes": true, 00:09:17.236 "zcopy": true, 00:09:17.236 "get_zone_info": false, 00:09:17.236 "zone_management": false, 00:09:17.236 "zone_append": false, 00:09:17.236 "compare": false, 00:09:17.236 "compare_and_write": false, 00:09:17.236 "abort": true, 00:09:17.236 "seek_hole": false, 00:09:17.236 "seek_data": false, 00:09:17.236 "copy": true, 00:09:17.236 "nvme_iov_md": false 00:09:17.236 }, 00:09:17.236 "memory_domains": [ 00:09:17.236 { 00:09:17.236 "dma_device_id": "system", 00:09:17.236 "dma_device_type": 1 00:09:17.236 }, 00:09:17.236 { 00:09:17.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.236 "dma_device_type": 2 00:09:17.236 } 00:09:17.236 ], 00:09:17.236 "driver_specific": {} 00:09:17.236 } 00:09:17.236 ] 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.236 17:53:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.236 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.237 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.237 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.237 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.237 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.237 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.237 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.237 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.237 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.237 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.237 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.237 "name": "Existed_Raid", 00:09:17.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.237 "strip_size_kb": 64, 00:09:17.237 "state": "configuring", 00:09:17.237 "raid_level": "concat", 00:09:17.237 "superblock": false, 00:09:17.237 "num_base_bdevs": 3, 00:09:17.237 "num_base_bdevs_discovered": 2, 00:09:17.237 "num_base_bdevs_operational": 3, 00:09:17.237 "base_bdevs_list": [ 00:09:17.237 { 00:09:17.237 "name": "BaseBdev1", 
00:09:17.237 "uuid": "d7828ba9-9b93-4aaa-b3aa-a1184421cdfa", 00:09:17.237 "is_configured": true, 00:09:17.237 "data_offset": 0, 00:09:17.237 "data_size": 65536 00:09:17.237 }, 00:09:17.237 { 00:09:17.237 "name": null, 00:09:17.237 "uuid": "123ac421-2cbc-4ba1-b459-d84e940815b5", 00:09:17.237 "is_configured": false, 00:09:17.237 "data_offset": 0, 00:09:17.237 "data_size": 65536 00:09:17.237 }, 00:09:17.237 { 00:09:17.237 "name": "BaseBdev3", 00:09:17.237 "uuid": "7f931035-aba9-4ab6-b407-7bf7c0bfbc2f", 00:09:17.237 "is_configured": true, 00:09:17.237 "data_offset": 0, 00:09:17.237 "data_size": 65536 00:09:17.237 } 00:09:17.237 ] 00:09:17.237 }' 00:09:17.237 17:53:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.237 17:53:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.504 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:17.504 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.504 17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.505 [2024-11-26 17:53:59.331785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:17.505 
17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.505 17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.506 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.506 17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.771 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.771 "name": "Existed_Raid", 00:09:17.771 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:17.771 "strip_size_kb": 64, 00:09:17.771 "state": "configuring", 00:09:17.771 "raid_level": "concat", 00:09:17.771 "superblock": false, 00:09:17.771 "num_base_bdevs": 3, 00:09:17.771 "num_base_bdevs_discovered": 1, 00:09:17.771 "num_base_bdevs_operational": 3, 00:09:17.771 "base_bdevs_list": [ 00:09:17.771 { 00:09:17.771 "name": "BaseBdev1", 00:09:17.771 "uuid": "d7828ba9-9b93-4aaa-b3aa-a1184421cdfa", 00:09:17.771 "is_configured": true, 00:09:17.771 "data_offset": 0, 00:09:17.771 "data_size": 65536 00:09:17.771 }, 00:09:17.771 { 00:09:17.771 "name": null, 00:09:17.771 "uuid": "123ac421-2cbc-4ba1-b459-d84e940815b5", 00:09:17.771 "is_configured": false, 00:09:17.771 "data_offset": 0, 00:09:17.771 "data_size": 65536 00:09:17.771 }, 00:09:17.771 { 00:09:17.771 "name": null, 00:09:17.771 "uuid": "7f931035-aba9-4ab6-b407-7bf7c0bfbc2f", 00:09:17.771 "is_configured": false, 00:09:17.771 "data_offset": 0, 00:09:17.771 "data_size": 65536 00:09:17.771 } 00:09:17.771 ] 00:09:17.771 }' 00:09:17.771 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.771 17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.031 [2024-11-26 17:53:59.854961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.031 17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.290 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.290 "name": "Existed_Raid", 00:09:18.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.290 "strip_size_kb": 64, 00:09:18.290 "state": "configuring", 00:09:18.290 "raid_level": "concat", 00:09:18.290 "superblock": false, 00:09:18.290 "num_base_bdevs": 3, 00:09:18.290 "num_base_bdevs_discovered": 2, 00:09:18.290 "num_base_bdevs_operational": 3, 00:09:18.290 "base_bdevs_list": [ 00:09:18.290 { 00:09:18.290 "name": "BaseBdev1", 00:09:18.290 "uuid": "d7828ba9-9b93-4aaa-b3aa-a1184421cdfa", 00:09:18.290 "is_configured": true, 00:09:18.290 "data_offset": 0, 00:09:18.290 "data_size": 65536 00:09:18.290 }, 00:09:18.290 { 00:09:18.290 "name": null, 00:09:18.290 "uuid": "123ac421-2cbc-4ba1-b459-d84e940815b5", 00:09:18.290 "is_configured": false, 00:09:18.290 "data_offset": 0, 00:09:18.290 "data_size": 65536 00:09:18.290 }, 00:09:18.290 { 00:09:18.290 "name": "BaseBdev3", 00:09:18.290 "uuid": "7f931035-aba9-4ab6-b407-7bf7c0bfbc2f", 00:09:18.290 "is_configured": true, 00:09:18.290 "data_offset": 0, 00:09:18.290 "data_size": 65536 00:09:18.290 } 00:09:18.290 ] 00:09:18.290 }' 00:09:18.290 17:53:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.290 17:53:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.548 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.548 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.548 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:18.548 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:18.548 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.548 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:18.548 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:18.548 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.548 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.548 [2024-11-26 17:54:00.362139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:18.806 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.806 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:18.806 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.806 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.806 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.806 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.806 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.806 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.806 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.806 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.806 17:54:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.806 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.806 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.806 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.806 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.806 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.806 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.806 "name": "Existed_Raid", 00:09:18.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.806 "strip_size_kb": 64, 00:09:18.806 "state": "configuring", 00:09:18.806 "raid_level": "concat", 00:09:18.806 "superblock": false, 00:09:18.806 "num_base_bdevs": 3, 00:09:18.806 "num_base_bdevs_discovered": 1, 00:09:18.806 "num_base_bdevs_operational": 3, 00:09:18.806 "base_bdevs_list": [ 00:09:18.806 { 00:09:18.806 "name": null, 00:09:18.806 "uuid": "d7828ba9-9b93-4aaa-b3aa-a1184421cdfa", 00:09:18.806 "is_configured": false, 00:09:18.806 "data_offset": 0, 00:09:18.806 "data_size": 65536 00:09:18.806 }, 00:09:18.806 { 00:09:18.806 "name": null, 00:09:18.806 "uuid": "123ac421-2cbc-4ba1-b459-d84e940815b5", 00:09:18.806 "is_configured": false, 00:09:18.806 "data_offset": 0, 00:09:18.806 "data_size": 65536 00:09:18.806 }, 00:09:18.806 { 00:09:18.806 "name": "BaseBdev3", 00:09:18.806 "uuid": "7f931035-aba9-4ab6-b407-7bf7c0bfbc2f", 00:09:18.806 "is_configured": true, 00:09:18.806 "data_offset": 0, 00:09:18.806 "data_size": 65536 00:09:18.806 } 00:09:18.806 ] 00:09:18.806 }' 00:09:18.806 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.806 17:54:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.066 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:19.066 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.066 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.066 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.066 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.066 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:19.066 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:19.066 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.067 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.327 [2024-11-26 17:54:00.931556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.327 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.327 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:19.327 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.327 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.327 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.327 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.327 17:54:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.327 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.327 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.327 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.327 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.327 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.327 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.327 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.327 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.327 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.327 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.327 "name": "Existed_Raid", 00:09:19.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.327 "strip_size_kb": 64, 00:09:19.327 "state": "configuring", 00:09:19.327 "raid_level": "concat", 00:09:19.327 "superblock": false, 00:09:19.327 "num_base_bdevs": 3, 00:09:19.327 "num_base_bdevs_discovered": 2, 00:09:19.327 "num_base_bdevs_operational": 3, 00:09:19.327 "base_bdevs_list": [ 00:09:19.327 { 00:09:19.327 "name": null, 00:09:19.327 "uuid": "d7828ba9-9b93-4aaa-b3aa-a1184421cdfa", 00:09:19.327 "is_configured": false, 00:09:19.327 "data_offset": 0, 00:09:19.327 "data_size": 65536 00:09:19.327 }, 00:09:19.327 { 00:09:19.327 "name": "BaseBdev2", 00:09:19.327 "uuid": "123ac421-2cbc-4ba1-b459-d84e940815b5", 00:09:19.327 "is_configured": true, 00:09:19.327 "data_offset": 
0, 00:09:19.327 "data_size": 65536 00:09:19.327 }, 00:09:19.327 { 00:09:19.327 "name": "BaseBdev3", 00:09:19.327 "uuid": "7f931035-aba9-4ab6-b407-7bf7c0bfbc2f", 00:09:19.327 "is_configured": true, 00:09:19.327 "data_offset": 0, 00:09:19.327 "data_size": 65536 00:09:19.327 } 00:09:19.327 ] 00:09:19.327 }' 00:09:19.327 17:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.327 17:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.586 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:19.586 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.586 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.586 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.586 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.586 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:19.586 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.586 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:19.586 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.586 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.586 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d7828ba9-9b93-4aaa-b3aa-a1184421cdfa 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.846 [2024-11-26 17:54:01.510862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:19.846 [2024-11-26 17:54:01.511077] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:19.846 [2024-11-26 17:54:01.511097] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:19.846 [2024-11-26 17:54:01.511413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:19.846 [2024-11-26 17:54:01.511606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:19.846 [2024-11-26 17:54:01.511617] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:19.846 [2024-11-26 17:54:01.511989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.846 NewBaseBdev 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:19.846 
17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.846 [ 00:09:19.846 { 00:09:19.846 "name": "NewBaseBdev", 00:09:19.846 "aliases": [ 00:09:19.846 "d7828ba9-9b93-4aaa-b3aa-a1184421cdfa" 00:09:19.846 ], 00:09:19.846 "product_name": "Malloc disk", 00:09:19.846 "block_size": 512, 00:09:19.846 "num_blocks": 65536, 00:09:19.846 "uuid": "d7828ba9-9b93-4aaa-b3aa-a1184421cdfa", 00:09:19.846 "assigned_rate_limits": { 00:09:19.846 "rw_ios_per_sec": 0, 00:09:19.846 "rw_mbytes_per_sec": 0, 00:09:19.846 "r_mbytes_per_sec": 0, 00:09:19.846 "w_mbytes_per_sec": 0 00:09:19.846 }, 00:09:19.846 "claimed": true, 00:09:19.846 "claim_type": "exclusive_write", 00:09:19.846 "zoned": false, 00:09:19.846 "supported_io_types": { 00:09:19.846 "read": true, 00:09:19.846 "write": true, 00:09:19.846 "unmap": true, 00:09:19.846 "flush": true, 00:09:19.846 "reset": true, 00:09:19.846 "nvme_admin": false, 00:09:19.846 "nvme_io": false, 00:09:19.846 "nvme_io_md": false, 00:09:19.846 "write_zeroes": true, 00:09:19.846 "zcopy": true, 00:09:19.846 "get_zone_info": false, 00:09:19.846 "zone_management": false, 00:09:19.846 "zone_append": false, 00:09:19.846 "compare": false, 00:09:19.846 "compare_and_write": false, 00:09:19.846 "abort": true, 00:09:19.846 "seek_hole": false, 00:09:19.846 "seek_data": false, 00:09:19.846 "copy": true, 00:09:19.846 "nvme_iov_md": false 00:09:19.846 }, 00:09:19.846 
"memory_domains": [ 00:09:19.846 { 00:09:19.846 "dma_device_id": "system", 00:09:19.846 "dma_device_type": 1 00:09:19.846 }, 00:09:19.846 { 00:09:19.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.846 "dma_device_type": 2 00:09:19.846 } 00:09:19.846 ], 00:09:19.846 "driver_specific": {} 00:09:19.846 } 00:09:19.846 ] 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.846 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.846 "name": "Existed_Raid", 00:09:19.846 "uuid": "fc4025e4-a32b-482f-87b0-d7fd3e6a8dd2", 00:09:19.846 "strip_size_kb": 64, 00:09:19.846 "state": "online", 00:09:19.846 "raid_level": "concat", 00:09:19.846 "superblock": false, 00:09:19.846 "num_base_bdevs": 3, 00:09:19.846 "num_base_bdevs_discovered": 3, 00:09:19.846 "num_base_bdevs_operational": 3, 00:09:19.846 "base_bdevs_list": [ 00:09:19.846 { 00:09:19.846 "name": "NewBaseBdev", 00:09:19.846 "uuid": "d7828ba9-9b93-4aaa-b3aa-a1184421cdfa", 00:09:19.846 "is_configured": true, 00:09:19.846 "data_offset": 0, 00:09:19.846 "data_size": 65536 00:09:19.846 }, 00:09:19.846 { 00:09:19.846 "name": "BaseBdev2", 00:09:19.846 "uuid": "123ac421-2cbc-4ba1-b459-d84e940815b5", 00:09:19.846 "is_configured": true, 00:09:19.846 "data_offset": 0, 00:09:19.846 "data_size": 65536 00:09:19.846 }, 00:09:19.846 { 00:09:19.846 "name": "BaseBdev3", 00:09:19.846 "uuid": "7f931035-aba9-4ab6-b407-7bf7c0bfbc2f", 00:09:19.846 "is_configured": true, 00:09:19.846 "data_offset": 0, 00:09:19.846 "data_size": 65536 00:09:19.846 } 00:09:19.846 ] 00:09:19.846 }' 00:09:19.847 17:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.847 17:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.415 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:20.415 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:20.415 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:20.415 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.416 [2024-11-26 17:54:02.046430] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.416 "name": "Existed_Raid", 00:09:20.416 "aliases": [ 00:09:20.416 "fc4025e4-a32b-482f-87b0-d7fd3e6a8dd2" 00:09:20.416 ], 00:09:20.416 "product_name": "Raid Volume", 00:09:20.416 "block_size": 512, 00:09:20.416 "num_blocks": 196608, 00:09:20.416 "uuid": "fc4025e4-a32b-482f-87b0-d7fd3e6a8dd2", 00:09:20.416 "assigned_rate_limits": { 00:09:20.416 "rw_ios_per_sec": 0, 00:09:20.416 "rw_mbytes_per_sec": 0, 00:09:20.416 "r_mbytes_per_sec": 0, 00:09:20.416 "w_mbytes_per_sec": 0 00:09:20.416 }, 00:09:20.416 "claimed": false, 00:09:20.416 "zoned": false, 00:09:20.416 "supported_io_types": { 00:09:20.416 "read": true, 00:09:20.416 "write": true, 00:09:20.416 "unmap": true, 00:09:20.416 "flush": true, 00:09:20.416 "reset": true, 00:09:20.416 "nvme_admin": false, 00:09:20.416 "nvme_io": false, 00:09:20.416 "nvme_io_md": false, 00:09:20.416 "write_zeroes": true, 
00:09:20.416 "zcopy": false, 00:09:20.416 "get_zone_info": false, 00:09:20.416 "zone_management": false, 00:09:20.416 "zone_append": false, 00:09:20.416 "compare": false, 00:09:20.416 "compare_and_write": false, 00:09:20.416 "abort": false, 00:09:20.416 "seek_hole": false, 00:09:20.416 "seek_data": false, 00:09:20.416 "copy": false, 00:09:20.416 "nvme_iov_md": false 00:09:20.416 }, 00:09:20.416 "memory_domains": [ 00:09:20.416 { 00:09:20.416 "dma_device_id": "system", 00:09:20.416 "dma_device_type": 1 00:09:20.416 }, 00:09:20.416 { 00:09:20.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.416 "dma_device_type": 2 00:09:20.416 }, 00:09:20.416 { 00:09:20.416 "dma_device_id": "system", 00:09:20.416 "dma_device_type": 1 00:09:20.416 }, 00:09:20.416 { 00:09:20.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.416 "dma_device_type": 2 00:09:20.416 }, 00:09:20.416 { 00:09:20.416 "dma_device_id": "system", 00:09:20.416 "dma_device_type": 1 00:09:20.416 }, 00:09:20.416 { 00:09:20.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.416 "dma_device_type": 2 00:09:20.416 } 00:09:20.416 ], 00:09:20.416 "driver_specific": { 00:09:20.416 "raid": { 00:09:20.416 "uuid": "fc4025e4-a32b-482f-87b0-d7fd3e6a8dd2", 00:09:20.416 "strip_size_kb": 64, 00:09:20.416 "state": "online", 00:09:20.416 "raid_level": "concat", 00:09:20.416 "superblock": false, 00:09:20.416 "num_base_bdevs": 3, 00:09:20.416 "num_base_bdevs_discovered": 3, 00:09:20.416 "num_base_bdevs_operational": 3, 00:09:20.416 "base_bdevs_list": [ 00:09:20.416 { 00:09:20.416 "name": "NewBaseBdev", 00:09:20.416 "uuid": "d7828ba9-9b93-4aaa-b3aa-a1184421cdfa", 00:09:20.416 "is_configured": true, 00:09:20.416 "data_offset": 0, 00:09:20.416 "data_size": 65536 00:09:20.416 }, 00:09:20.416 { 00:09:20.416 "name": "BaseBdev2", 00:09:20.416 "uuid": "123ac421-2cbc-4ba1-b459-d84e940815b5", 00:09:20.416 "is_configured": true, 00:09:20.416 "data_offset": 0, 00:09:20.416 "data_size": 65536 00:09:20.416 }, 00:09:20.416 { 
00:09:20.416 "name": "BaseBdev3", 00:09:20.416 "uuid": "7f931035-aba9-4ab6-b407-7bf7c0bfbc2f", 00:09:20.416 "is_configured": true, 00:09:20.416 "data_offset": 0, 00:09:20.416 "data_size": 65536 00:09:20.416 } 00:09:20.416 ] 00:09:20.416 } 00:09:20.416 } 00:09:20.416 }' 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:20.416 BaseBdev2 00:09:20.416 BaseBdev3' 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.416 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.676 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.676 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.676 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.676 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:20.676 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.676 17:54:02 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:20.676 [2024-11-26 17:54:02.309622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:20.676 [2024-11-26 17:54:02.309662] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.676 [2024-11-26 17:54:02.309773] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.676 [2024-11-26 17:54:02.309839] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.676 [2024-11-26 17:54:02.309854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:20.676 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.676 17:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65808 00:09:20.676 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65808 ']' 00:09:20.676 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65808 00:09:20.676 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:20.676 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.676 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65808 00:09:20.676 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.676 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.676 killing process with pid 65808 00:09:20.676 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65808' 00:09:20.676 17:54:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65808 00:09:20.676 [2024-11-26 17:54:02.361115] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.676 17:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65808 00:09:20.936 [2024-11-26 17:54:02.715010] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:22.317 17:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:22.317 00:09:22.317 real 0m10.849s 00:09:22.317 user 0m17.039s 00:09:22.317 sys 0m1.888s 00:09:22.317 ************************************ 00:09:22.317 END TEST raid_state_function_test 00:09:22.317 ************************************ 00:09:22.317 17:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.317 17:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.317 17:54:04 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:22.317 17:54:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:22.317 17:54:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.317 17:54:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:22.317 ************************************ 00:09:22.317 START TEST raid_state_function_test_sb 00:09:22.317 ************************************ 00:09:22.317 17:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:22.317 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:22.317 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:22.317 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:22.317 17:54:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:22.317 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:22.317 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.317 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:22.317 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:22.318 Process raid pid: 66435 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66435 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66435' 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66435 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66435 ']' 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.318 17:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.578 [2024-11-26 17:54:04.198056] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:09:22.578 [2024-11-26 17:54:04.198312] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.578 [2024-11-26 17:54:04.362599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.838 [2024-11-26 17:54:04.507502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.098 [2024-11-26 17:54:04.757065] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.098 [2024-11-26 17:54:04.757125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.359 [2024-11-26 17:54:05.161158] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.359 [2024-11-26 17:54:05.161229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.359 [2024-11-26 
17:54:05.161242] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.359 [2024-11-26 17:54:05.161254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.359 [2024-11-26 17:54:05.161262] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:23.359 [2024-11-26 17:54:05.161272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.359 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.359 "name": "Existed_Raid", 00:09:23.359 "uuid": "5faf2e22-51bc-4857-a0e1-ce2db6e2b334", 00:09:23.359 "strip_size_kb": 64, 00:09:23.359 "state": "configuring", 00:09:23.359 "raid_level": "concat", 00:09:23.359 "superblock": true, 00:09:23.359 "num_base_bdevs": 3, 00:09:23.359 "num_base_bdevs_discovered": 0, 00:09:23.359 "num_base_bdevs_operational": 3, 00:09:23.359 "base_bdevs_list": [ 00:09:23.359 { 00:09:23.359 "name": "BaseBdev1", 00:09:23.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.359 "is_configured": false, 00:09:23.359 "data_offset": 0, 00:09:23.359 "data_size": 0 00:09:23.359 }, 00:09:23.359 { 00:09:23.359 "name": "BaseBdev2", 00:09:23.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.359 "is_configured": false, 00:09:23.359 "data_offset": 0, 00:09:23.359 "data_size": 0 00:09:23.359 }, 00:09:23.359 { 00:09:23.359 "name": "BaseBdev3", 00:09:23.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.359 "is_configured": false, 00:09:23.359 "data_offset": 0, 00:09:23.359 "data_size": 0 00:09:23.359 } 00:09:23.359 ] 00:09:23.359 }' 00:09:23.619 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.619 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.879 [2024-11-26 17:54:05.601072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:23.879 [2024-11-26 17:54:05.601191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.879 [2024-11-26 17:54:05.613114] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.879 [2024-11-26 17:54:05.613250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.879 [2024-11-26 17:54:05.613323] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.879 [2024-11-26 17:54:05.613374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.879 [2024-11-26 17:54:05.613412] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:23.879 [2024-11-26 17:54:05.613457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:23.879 
17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.879 [2024-11-26 17:54:05.667147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.879 BaseBdev1 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.879 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.879 [ 00:09:23.879 { 
00:09:23.879 "name": "BaseBdev1", 00:09:23.879 "aliases": [ 00:09:23.879 "e8001b7c-46d5-43eb-bcb7-7dc61b31fa8b" 00:09:23.879 ], 00:09:23.879 "product_name": "Malloc disk", 00:09:23.879 "block_size": 512, 00:09:23.879 "num_blocks": 65536, 00:09:23.879 "uuid": "e8001b7c-46d5-43eb-bcb7-7dc61b31fa8b", 00:09:23.879 "assigned_rate_limits": { 00:09:23.879 "rw_ios_per_sec": 0, 00:09:23.879 "rw_mbytes_per_sec": 0, 00:09:23.879 "r_mbytes_per_sec": 0, 00:09:23.879 "w_mbytes_per_sec": 0 00:09:23.879 }, 00:09:23.879 "claimed": true, 00:09:23.879 "claim_type": "exclusive_write", 00:09:23.879 "zoned": false, 00:09:23.879 "supported_io_types": { 00:09:23.879 "read": true, 00:09:23.879 "write": true, 00:09:23.879 "unmap": true, 00:09:23.879 "flush": true, 00:09:23.879 "reset": true, 00:09:23.879 "nvme_admin": false, 00:09:23.879 "nvme_io": false, 00:09:23.879 "nvme_io_md": false, 00:09:23.879 "write_zeroes": true, 00:09:23.879 "zcopy": true, 00:09:23.879 "get_zone_info": false, 00:09:23.879 "zone_management": false, 00:09:23.879 "zone_append": false, 00:09:23.879 "compare": false, 00:09:23.879 "compare_and_write": false, 00:09:23.879 "abort": true, 00:09:23.879 "seek_hole": false, 00:09:23.879 "seek_data": false, 00:09:23.879 "copy": true, 00:09:23.879 "nvme_iov_md": false 00:09:23.879 }, 00:09:23.879 "memory_domains": [ 00:09:23.879 { 00:09:23.879 "dma_device_id": "system", 00:09:23.879 "dma_device_type": 1 00:09:23.879 }, 00:09:23.879 { 00:09:23.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.879 "dma_device_type": 2 00:09:23.879 } 00:09:23.879 ], 00:09:23.879 "driver_specific": {} 00:09:23.879 } 00:09:23.880 ] 00:09:23.880 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.880 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:23.880 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:23.880 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.880 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.880 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.880 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.880 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.880 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.880 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.880 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.880 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.880 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.880 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.880 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.880 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.880 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.139 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.139 "name": "Existed_Raid", 00:09:24.139 "uuid": "83889370-7b76-4fb7-a968-fe02d673818c", 00:09:24.139 "strip_size_kb": 64, 00:09:24.139 "state": "configuring", 00:09:24.139 "raid_level": "concat", 00:09:24.139 "superblock": true, 00:09:24.139 
"num_base_bdevs": 3, 00:09:24.139 "num_base_bdevs_discovered": 1, 00:09:24.139 "num_base_bdevs_operational": 3, 00:09:24.139 "base_bdevs_list": [ 00:09:24.139 { 00:09:24.139 "name": "BaseBdev1", 00:09:24.139 "uuid": "e8001b7c-46d5-43eb-bcb7-7dc61b31fa8b", 00:09:24.139 "is_configured": true, 00:09:24.139 "data_offset": 2048, 00:09:24.139 "data_size": 63488 00:09:24.139 }, 00:09:24.139 { 00:09:24.139 "name": "BaseBdev2", 00:09:24.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.139 "is_configured": false, 00:09:24.139 "data_offset": 0, 00:09:24.139 "data_size": 0 00:09:24.139 }, 00:09:24.139 { 00:09:24.139 "name": "BaseBdev3", 00:09:24.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.139 "is_configured": false, 00:09:24.139 "data_offset": 0, 00:09:24.139 "data_size": 0 00:09:24.139 } 00:09:24.139 ] 00:09:24.139 }' 00:09:24.139 17:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.139 17:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.398 [2024-11-26 17:54:06.206464] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:24.398 [2024-11-26 17:54:06.206628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:24.398 
17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.398 [2024-11-26 17:54:06.214549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.398 [2024-11-26 17:54:06.216876] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:24.398 [2024-11-26 17:54:06.216963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:24.398 [2024-11-26 17:54:06.216976] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:24.398 [2024-11-26 17:54:06.216988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.398 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.657 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.657 "name": "Existed_Raid", 00:09:24.657 "uuid": "9b3d5e5d-dfdf-4db6-b2a2-37e5eee324f8", 00:09:24.657 "strip_size_kb": 64, 00:09:24.657 "state": "configuring", 00:09:24.657 "raid_level": "concat", 00:09:24.657 "superblock": true, 00:09:24.657 "num_base_bdevs": 3, 00:09:24.657 "num_base_bdevs_discovered": 1, 00:09:24.657 "num_base_bdevs_operational": 3, 00:09:24.657 "base_bdevs_list": [ 00:09:24.657 { 00:09:24.657 "name": "BaseBdev1", 00:09:24.657 "uuid": "e8001b7c-46d5-43eb-bcb7-7dc61b31fa8b", 00:09:24.657 "is_configured": true, 00:09:24.657 "data_offset": 2048, 00:09:24.657 "data_size": 63488 00:09:24.657 }, 00:09:24.657 { 00:09:24.657 "name": "BaseBdev2", 00:09:24.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.657 "is_configured": false, 00:09:24.657 "data_offset": 0, 00:09:24.657 "data_size": 0 00:09:24.657 }, 00:09:24.657 { 00:09:24.657 "name": "BaseBdev3", 00:09:24.657 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:24.657 "is_configured": false, 00:09:24.657 "data_offset": 0, 00:09:24.657 "data_size": 0 00:09:24.657 } 00:09:24.657 ] 00:09:24.657 }' 00:09:24.657 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.657 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.917 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:24.917 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.917 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.917 [2024-11-26 17:54:06.705721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.917 BaseBdev2 00:09:24.917 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.917 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:24.917 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:24.917 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.917 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:24.917 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.917 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.917 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.917 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.917 17:54:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:24.917 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.917 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:24.917 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.917 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.917 [ 00:09:24.917 { 00:09:24.917 "name": "BaseBdev2", 00:09:24.917 "aliases": [ 00:09:24.917 "89d31763-0c38-4da3-9151-8968e0f6fbc6" 00:09:24.917 ], 00:09:24.917 "product_name": "Malloc disk", 00:09:24.917 "block_size": 512, 00:09:24.917 "num_blocks": 65536, 00:09:24.917 "uuid": "89d31763-0c38-4da3-9151-8968e0f6fbc6", 00:09:24.917 "assigned_rate_limits": { 00:09:24.917 "rw_ios_per_sec": 0, 00:09:24.917 "rw_mbytes_per_sec": 0, 00:09:24.917 "r_mbytes_per_sec": 0, 00:09:24.917 "w_mbytes_per_sec": 0 00:09:24.917 }, 00:09:24.917 "claimed": true, 00:09:24.917 "claim_type": "exclusive_write", 00:09:24.917 "zoned": false, 00:09:24.918 "supported_io_types": { 00:09:24.918 "read": true, 00:09:24.918 "write": true, 00:09:24.918 "unmap": true, 00:09:24.918 "flush": true, 00:09:24.918 "reset": true, 00:09:24.918 "nvme_admin": false, 00:09:24.918 "nvme_io": false, 00:09:24.918 "nvme_io_md": false, 00:09:24.918 "write_zeroes": true, 00:09:24.918 "zcopy": true, 00:09:24.918 "get_zone_info": false, 00:09:24.918 "zone_management": false, 00:09:24.918 "zone_append": false, 00:09:24.918 "compare": false, 00:09:24.918 "compare_and_write": false, 00:09:24.918 "abort": true, 00:09:24.918 "seek_hole": false, 00:09:24.918 "seek_data": false, 00:09:24.918 "copy": true, 00:09:24.918 "nvme_iov_md": false 00:09:24.918 }, 00:09:24.918 "memory_domains": [ 00:09:24.918 { 00:09:24.918 "dma_device_id": "system", 00:09:24.918 "dma_device_type": 1 00:09:24.918 }, 00:09:24.918 { 00:09:24.918 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.918 "dma_device_type": 2 00:09:24.918 } 00:09:24.918 ], 00:09:24.918 "driver_specific": {} 00:09:24.918 } 00:09:24.918 ] 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.918 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.177 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.177 "name": "Existed_Raid", 00:09:25.177 "uuid": "9b3d5e5d-dfdf-4db6-b2a2-37e5eee324f8", 00:09:25.177 "strip_size_kb": 64, 00:09:25.177 "state": "configuring", 00:09:25.177 "raid_level": "concat", 00:09:25.177 "superblock": true, 00:09:25.177 "num_base_bdevs": 3, 00:09:25.177 "num_base_bdevs_discovered": 2, 00:09:25.177 "num_base_bdevs_operational": 3, 00:09:25.177 "base_bdevs_list": [ 00:09:25.177 { 00:09:25.177 "name": "BaseBdev1", 00:09:25.177 "uuid": "e8001b7c-46d5-43eb-bcb7-7dc61b31fa8b", 00:09:25.177 "is_configured": true, 00:09:25.177 "data_offset": 2048, 00:09:25.177 "data_size": 63488 00:09:25.177 }, 00:09:25.177 { 00:09:25.177 "name": "BaseBdev2", 00:09:25.177 "uuid": "89d31763-0c38-4da3-9151-8968e0f6fbc6", 00:09:25.177 "is_configured": true, 00:09:25.177 "data_offset": 2048, 00:09:25.177 "data_size": 63488 00:09:25.177 }, 00:09:25.177 { 00:09:25.177 "name": "BaseBdev3", 00:09:25.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.177 "is_configured": false, 00:09:25.177 "data_offset": 0, 00:09:25.177 "data_size": 0 00:09:25.177 } 00:09:25.177 ] 00:09:25.177 }' 00:09:25.177 17:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.177 17:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.436 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:25.436 17:54:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.436 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.436 [2024-11-26 17:54:07.248186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.436 [2024-11-26 17:54:07.248614] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:25.436 [2024-11-26 17:54:07.248694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:25.436 [2024-11-26 17:54:07.249107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:25.436 BaseBdev3 00:09:25.436 [2024-11-26 17:54:07.249355] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:25.436 [2024-11-26 17:54:07.249370] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:25.436 [2024-11-26 17:54:07.249566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.436 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.436 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:25.436 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:25.436 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.436 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:25.436 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.436 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.436 17:54:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.436 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.436 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.436 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.436 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:25.436 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.436 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.436 [ 00:09:25.436 { 00:09:25.436 "name": "BaseBdev3", 00:09:25.436 "aliases": [ 00:09:25.436 "a31c9ca0-eca8-4bad-a9d1-07d64dd13796" 00:09:25.436 ], 00:09:25.436 "product_name": "Malloc disk", 00:09:25.436 "block_size": 512, 00:09:25.436 "num_blocks": 65536, 00:09:25.436 "uuid": "a31c9ca0-eca8-4bad-a9d1-07d64dd13796", 00:09:25.436 "assigned_rate_limits": { 00:09:25.436 "rw_ios_per_sec": 0, 00:09:25.436 "rw_mbytes_per_sec": 0, 00:09:25.436 "r_mbytes_per_sec": 0, 00:09:25.436 "w_mbytes_per_sec": 0 00:09:25.436 }, 00:09:25.436 "claimed": true, 00:09:25.436 "claim_type": "exclusive_write", 00:09:25.436 "zoned": false, 00:09:25.436 "supported_io_types": { 00:09:25.436 "read": true, 00:09:25.436 "write": true, 00:09:25.436 "unmap": true, 00:09:25.436 "flush": true, 00:09:25.436 "reset": true, 00:09:25.436 "nvme_admin": false, 00:09:25.436 "nvme_io": false, 00:09:25.436 "nvme_io_md": false, 00:09:25.436 "write_zeroes": true, 00:09:25.436 "zcopy": true, 00:09:25.436 "get_zone_info": false, 00:09:25.436 "zone_management": false, 00:09:25.436 "zone_append": false, 00:09:25.436 "compare": false, 00:09:25.436 "compare_and_write": false, 00:09:25.436 "abort": true, 00:09:25.437 "seek_hole": false, 00:09:25.437 "seek_data": false, 
00:09:25.437 "copy": true, 00:09:25.437 "nvme_iov_md": false 00:09:25.437 }, 00:09:25.437 "memory_domains": [ 00:09:25.437 { 00:09:25.437 "dma_device_id": "system", 00:09:25.437 "dma_device_type": 1 00:09:25.437 }, 00:09:25.437 { 00:09:25.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.437 "dma_device_type": 2 00:09:25.437 } 00:09:25.437 ], 00:09:25.437 "driver_specific": {} 00:09:25.437 } 00:09:25.437 ] 00:09:25.437 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.437 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:25.437 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:25.437 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:25.437 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:25.437 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.437 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.437 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.437 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.437 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.437 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.437 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.437 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.437 17:54:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.437 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.437 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.437 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.437 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.696 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.696 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.696 "name": "Existed_Raid", 00:09:25.696 "uuid": "9b3d5e5d-dfdf-4db6-b2a2-37e5eee324f8", 00:09:25.696 "strip_size_kb": 64, 00:09:25.696 "state": "online", 00:09:25.696 "raid_level": "concat", 00:09:25.696 "superblock": true, 00:09:25.696 "num_base_bdevs": 3, 00:09:25.696 "num_base_bdevs_discovered": 3, 00:09:25.696 "num_base_bdevs_operational": 3, 00:09:25.696 "base_bdevs_list": [ 00:09:25.696 { 00:09:25.696 "name": "BaseBdev1", 00:09:25.696 "uuid": "e8001b7c-46d5-43eb-bcb7-7dc61b31fa8b", 00:09:25.696 "is_configured": true, 00:09:25.696 "data_offset": 2048, 00:09:25.696 "data_size": 63488 00:09:25.696 }, 00:09:25.696 { 00:09:25.696 "name": "BaseBdev2", 00:09:25.696 "uuid": "89d31763-0c38-4da3-9151-8968e0f6fbc6", 00:09:25.696 "is_configured": true, 00:09:25.696 "data_offset": 2048, 00:09:25.696 "data_size": 63488 00:09:25.696 }, 00:09:25.696 { 00:09:25.696 "name": "BaseBdev3", 00:09:25.696 "uuid": "a31c9ca0-eca8-4bad-a9d1-07d64dd13796", 00:09:25.696 "is_configured": true, 00:09:25.696 "data_offset": 2048, 00:09:25.696 "data_size": 63488 00:09:25.696 } 00:09:25.696 ] 00:09:25.696 }' 00:09:25.696 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.696 17:54:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.954 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:25.954 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:25.954 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:25.954 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.954 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.954 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.954 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:25.954 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.954 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.954 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.954 [2024-11-26 17:54:07.711857] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.954 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.954 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.954 "name": "Existed_Raid", 00:09:25.954 "aliases": [ 00:09:25.954 "9b3d5e5d-dfdf-4db6-b2a2-37e5eee324f8" 00:09:25.954 ], 00:09:25.954 "product_name": "Raid Volume", 00:09:25.954 "block_size": 512, 00:09:25.954 "num_blocks": 190464, 00:09:25.954 "uuid": "9b3d5e5d-dfdf-4db6-b2a2-37e5eee324f8", 00:09:25.954 "assigned_rate_limits": { 00:09:25.954 "rw_ios_per_sec": 0, 00:09:25.954 "rw_mbytes_per_sec": 0, 00:09:25.954 
"r_mbytes_per_sec": 0, 00:09:25.954 "w_mbytes_per_sec": 0 00:09:25.954 }, 00:09:25.954 "claimed": false, 00:09:25.954 "zoned": false, 00:09:25.954 "supported_io_types": { 00:09:25.954 "read": true, 00:09:25.954 "write": true, 00:09:25.954 "unmap": true, 00:09:25.954 "flush": true, 00:09:25.954 "reset": true, 00:09:25.954 "nvme_admin": false, 00:09:25.954 "nvme_io": false, 00:09:25.954 "nvme_io_md": false, 00:09:25.954 "write_zeroes": true, 00:09:25.954 "zcopy": false, 00:09:25.954 "get_zone_info": false, 00:09:25.954 "zone_management": false, 00:09:25.954 "zone_append": false, 00:09:25.954 "compare": false, 00:09:25.954 "compare_and_write": false, 00:09:25.954 "abort": false, 00:09:25.954 "seek_hole": false, 00:09:25.954 "seek_data": false, 00:09:25.954 "copy": false, 00:09:25.954 "nvme_iov_md": false 00:09:25.954 }, 00:09:25.954 "memory_domains": [ 00:09:25.954 { 00:09:25.954 "dma_device_id": "system", 00:09:25.954 "dma_device_type": 1 00:09:25.954 }, 00:09:25.954 { 00:09:25.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.954 "dma_device_type": 2 00:09:25.954 }, 00:09:25.954 { 00:09:25.954 "dma_device_id": "system", 00:09:25.954 "dma_device_type": 1 00:09:25.954 }, 00:09:25.954 { 00:09:25.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.954 "dma_device_type": 2 00:09:25.954 }, 00:09:25.954 { 00:09:25.954 "dma_device_id": "system", 00:09:25.954 "dma_device_type": 1 00:09:25.954 }, 00:09:25.954 { 00:09:25.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.954 "dma_device_type": 2 00:09:25.954 } 00:09:25.954 ], 00:09:25.955 "driver_specific": { 00:09:25.955 "raid": { 00:09:25.955 "uuid": "9b3d5e5d-dfdf-4db6-b2a2-37e5eee324f8", 00:09:25.955 "strip_size_kb": 64, 00:09:25.955 "state": "online", 00:09:25.955 "raid_level": "concat", 00:09:25.955 "superblock": true, 00:09:25.955 "num_base_bdevs": 3, 00:09:25.955 "num_base_bdevs_discovered": 3, 00:09:25.955 "num_base_bdevs_operational": 3, 00:09:25.955 "base_bdevs_list": [ 00:09:25.955 { 00:09:25.955 
"name": "BaseBdev1", 00:09:25.955 "uuid": "e8001b7c-46d5-43eb-bcb7-7dc61b31fa8b", 00:09:25.955 "is_configured": true, 00:09:25.955 "data_offset": 2048, 00:09:25.955 "data_size": 63488 00:09:25.955 }, 00:09:25.955 { 00:09:25.955 "name": "BaseBdev2", 00:09:25.955 "uuid": "89d31763-0c38-4da3-9151-8968e0f6fbc6", 00:09:25.955 "is_configured": true, 00:09:25.955 "data_offset": 2048, 00:09:25.955 "data_size": 63488 00:09:25.955 }, 00:09:25.955 { 00:09:25.955 "name": "BaseBdev3", 00:09:25.955 "uuid": "a31c9ca0-eca8-4bad-a9d1-07d64dd13796", 00:09:25.955 "is_configured": true, 00:09:25.955 "data_offset": 2048, 00:09:25.955 "data_size": 63488 00:09:25.955 } 00:09:25.955 ] 00:09:25.955 } 00:09:25.955 } 00:09:25.955 }' 00:09:25.955 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.955 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:25.955 BaseBdev2 00:09:25.955 BaseBdev3' 00:09:25.955 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.213 17:54:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.213 17:54:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.213 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.213 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.213 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:26.213 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.213 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.214 [2024-11-26 17:54:08.015107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:26.214 [2024-11-26 17:54:08.015153] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.214 [2024-11-26 17:54:08.015220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.474 "name": "Existed_Raid", 00:09:26.474 "uuid": "9b3d5e5d-dfdf-4db6-b2a2-37e5eee324f8", 00:09:26.474 "strip_size_kb": 64, 00:09:26.474 "state": "offline", 00:09:26.474 "raid_level": "concat", 00:09:26.474 "superblock": true, 00:09:26.474 "num_base_bdevs": 3, 00:09:26.474 "num_base_bdevs_discovered": 2, 00:09:26.474 "num_base_bdevs_operational": 2, 00:09:26.474 "base_bdevs_list": [ 00:09:26.474 { 00:09:26.474 "name": null, 00:09:26.474 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:26.474 "is_configured": false, 00:09:26.474 "data_offset": 0, 00:09:26.474 "data_size": 63488 00:09:26.474 }, 00:09:26.474 { 00:09:26.474 "name": "BaseBdev2", 00:09:26.474 "uuid": "89d31763-0c38-4da3-9151-8968e0f6fbc6", 00:09:26.474 "is_configured": true, 00:09:26.474 "data_offset": 2048, 00:09:26.474 "data_size": 63488 00:09:26.474 }, 00:09:26.474 { 00:09:26.474 "name": "BaseBdev3", 00:09:26.474 "uuid": "a31c9ca0-eca8-4bad-a9d1-07d64dd13796", 00:09:26.474 "is_configured": true, 00:09:26.474 "data_offset": 2048, 00:09:26.474 "data_size": 63488 00:09:26.474 } 00:09:26.474 ] 00:09:26.474 }' 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.474 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.044 [2024-11-26 17:54:08.667987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.044 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.044 [2024-11-26 17:54:08.844974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:27.044 [2024-11-26 17:54:08.845148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:27.304 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.304 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:27.304 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:27.304 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.304 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.304 17:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:27.304 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.304 17:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.304 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:27.304 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:27.304 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:27.304 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:27.304 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:27.304 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:27.304 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.304 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.304 BaseBdev2 00:09:27.304 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.304 
17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:27.304 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:27.304 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.305 [ 00:09:27.305 { 00:09:27.305 "name": "BaseBdev2", 00:09:27.305 "aliases": [ 00:09:27.305 "41fe705e-b627-4e7f-b8d7-48613eddd321" 00:09:27.305 ], 00:09:27.305 "product_name": "Malloc disk", 00:09:27.305 "block_size": 512, 00:09:27.305 "num_blocks": 65536, 00:09:27.305 "uuid": "41fe705e-b627-4e7f-b8d7-48613eddd321", 00:09:27.305 "assigned_rate_limits": { 00:09:27.305 "rw_ios_per_sec": 0, 00:09:27.305 "rw_mbytes_per_sec": 0, 00:09:27.305 "r_mbytes_per_sec": 0, 00:09:27.305 "w_mbytes_per_sec": 0 
00:09:27.305 }, 00:09:27.305 "claimed": false, 00:09:27.305 "zoned": false, 00:09:27.305 "supported_io_types": { 00:09:27.305 "read": true, 00:09:27.305 "write": true, 00:09:27.305 "unmap": true, 00:09:27.305 "flush": true, 00:09:27.305 "reset": true, 00:09:27.305 "nvme_admin": false, 00:09:27.305 "nvme_io": false, 00:09:27.305 "nvme_io_md": false, 00:09:27.305 "write_zeroes": true, 00:09:27.305 "zcopy": true, 00:09:27.305 "get_zone_info": false, 00:09:27.305 "zone_management": false, 00:09:27.305 "zone_append": false, 00:09:27.305 "compare": false, 00:09:27.305 "compare_and_write": false, 00:09:27.305 "abort": true, 00:09:27.305 "seek_hole": false, 00:09:27.305 "seek_data": false, 00:09:27.305 "copy": true, 00:09:27.305 "nvme_iov_md": false 00:09:27.305 }, 00:09:27.305 "memory_domains": [ 00:09:27.305 { 00:09:27.305 "dma_device_id": "system", 00:09:27.305 "dma_device_type": 1 00:09:27.305 }, 00:09:27.305 { 00:09:27.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.305 "dma_device_type": 2 00:09:27.305 } 00:09:27.305 ], 00:09:27.305 "driver_specific": {} 00:09:27.305 } 00:09:27.305 ] 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.305 BaseBdev3 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.305 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.564 [ 00:09:27.564 { 00:09:27.564 "name": "BaseBdev3", 00:09:27.564 "aliases": [ 00:09:27.564 "099f870d-0ed1-4c63-b9c4-ca16249376de" 00:09:27.564 ], 00:09:27.564 "product_name": "Malloc disk", 00:09:27.564 "block_size": 512, 00:09:27.564 "num_blocks": 65536, 00:09:27.564 "uuid": "099f870d-0ed1-4c63-b9c4-ca16249376de", 00:09:27.564 "assigned_rate_limits": { 00:09:27.564 "rw_ios_per_sec": 0, 00:09:27.564 "rw_mbytes_per_sec": 0, 
00:09:27.564 "r_mbytes_per_sec": 0, 00:09:27.564 "w_mbytes_per_sec": 0 00:09:27.564 }, 00:09:27.564 "claimed": false, 00:09:27.564 "zoned": false, 00:09:27.564 "supported_io_types": { 00:09:27.564 "read": true, 00:09:27.564 "write": true, 00:09:27.564 "unmap": true, 00:09:27.564 "flush": true, 00:09:27.564 "reset": true, 00:09:27.564 "nvme_admin": false, 00:09:27.564 "nvme_io": false, 00:09:27.564 "nvme_io_md": false, 00:09:27.564 "write_zeroes": true, 00:09:27.564 "zcopy": true, 00:09:27.564 "get_zone_info": false, 00:09:27.564 "zone_management": false, 00:09:27.564 "zone_append": false, 00:09:27.564 "compare": false, 00:09:27.564 "compare_and_write": false, 00:09:27.564 "abort": true, 00:09:27.564 "seek_hole": false, 00:09:27.564 "seek_data": false, 00:09:27.564 "copy": true, 00:09:27.564 "nvme_iov_md": false 00:09:27.564 }, 00:09:27.564 "memory_domains": [ 00:09:27.564 { 00:09:27.564 "dma_device_id": "system", 00:09:27.564 "dma_device_type": 1 00:09:27.564 }, 00:09:27.564 { 00:09:27.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.564 "dma_device_type": 2 00:09:27.564 } 00:09:27.564 ], 00:09:27.564 "driver_specific": {} 00:09:27.564 } 00:09:27.564 ] 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:27.564 [2024-11-26 17:54:09.183748] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:27.564 [2024-11-26 17:54:09.183934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:27.564 [2024-11-26 17:54:09.184007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.564 [2024-11-26 17:54:09.186466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.564 17:54:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.564 "name": "Existed_Raid", 00:09:27.564 "uuid": "0e525a90-ce37-485a-8bb8-c57227cbd1bb", 00:09:27.564 "strip_size_kb": 64, 00:09:27.564 "state": "configuring", 00:09:27.564 "raid_level": "concat", 00:09:27.564 "superblock": true, 00:09:27.564 "num_base_bdevs": 3, 00:09:27.564 "num_base_bdevs_discovered": 2, 00:09:27.564 "num_base_bdevs_operational": 3, 00:09:27.564 "base_bdevs_list": [ 00:09:27.564 { 00:09:27.564 "name": "BaseBdev1", 00:09:27.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.564 "is_configured": false, 00:09:27.564 "data_offset": 0, 00:09:27.564 "data_size": 0 00:09:27.564 }, 00:09:27.564 { 00:09:27.564 "name": "BaseBdev2", 00:09:27.564 "uuid": "41fe705e-b627-4e7f-b8d7-48613eddd321", 00:09:27.564 "is_configured": true, 00:09:27.564 "data_offset": 2048, 00:09:27.564 "data_size": 63488 00:09:27.564 }, 00:09:27.564 { 00:09:27.564 "name": "BaseBdev3", 00:09:27.564 "uuid": "099f870d-0ed1-4c63-b9c4-ca16249376de", 00:09:27.564 "is_configured": true, 00:09:27.564 "data_offset": 2048, 00:09:27.564 "data_size": 63488 00:09:27.564 } 00:09:27.564 ] 00:09:27.564 }' 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.564 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.823 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:27.823 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.823 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.823 [2024-11-26 17:54:09.667047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:27.823 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.823 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:27.823 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.823 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.823 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.823 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.823 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.823 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.823 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.823 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.823 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.823 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.823 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.823 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:27.823 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.081 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.081 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.081 "name": "Existed_Raid", 00:09:28.081 "uuid": "0e525a90-ce37-485a-8bb8-c57227cbd1bb", 00:09:28.081 "strip_size_kb": 64, 00:09:28.081 "state": "configuring", 00:09:28.081 "raid_level": "concat", 00:09:28.081 "superblock": true, 00:09:28.081 "num_base_bdevs": 3, 00:09:28.081 "num_base_bdevs_discovered": 1, 00:09:28.081 "num_base_bdevs_operational": 3, 00:09:28.081 "base_bdevs_list": [ 00:09:28.081 { 00:09:28.081 "name": "BaseBdev1", 00:09:28.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.081 "is_configured": false, 00:09:28.081 "data_offset": 0, 00:09:28.081 "data_size": 0 00:09:28.081 }, 00:09:28.081 { 00:09:28.081 "name": null, 00:09:28.081 "uuid": "41fe705e-b627-4e7f-b8d7-48613eddd321", 00:09:28.081 "is_configured": false, 00:09:28.081 "data_offset": 0, 00:09:28.081 "data_size": 63488 00:09:28.081 }, 00:09:28.081 { 00:09:28.081 "name": "BaseBdev3", 00:09:28.081 "uuid": "099f870d-0ed1-4c63-b9c4-ca16249376de", 00:09:28.081 "is_configured": true, 00:09:28.081 "data_offset": 2048, 00:09:28.081 "data_size": 63488 00:09:28.081 } 00:09:28.081 ] 00:09:28.081 }' 00:09:28.081 17:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.081 17:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.339 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.339 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.339 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.339 17:54:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:28.339 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.339 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:28.339 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:28.339 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.339 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.597 [2024-11-26 17:54:10.202274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.597 BaseBdev1 00:09:28.597 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.597 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:28.597 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:28.597 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.597 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:28.597 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.597 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.597 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:28.597 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.597 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.597 
17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.597 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:28.597 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.597 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.597 [ 00:09:28.597 { 00:09:28.597 "name": "BaseBdev1", 00:09:28.597 "aliases": [ 00:09:28.597 "c6faf8da-8f3a-47dc-b4fa-40f0fe9ffe35" 00:09:28.598 ], 00:09:28.598 "product_name": "Malloc disk", 00:09:28.598 "block_size": 512, 00:09:28.598 "num_blocks": 65536, 00:09:28.598 "uuid": "c6faf8da-8f3a-47dc-b4fa-40f0fe9ffe35", 00:09:28.598 "assigned_rate_limits": { 00:09:28.598 "rw_ios_per_sec": 0, 00:09:28.598 "rw_mbytes_per_sec": 0, 00:09:28.598 "r_mbytes_per_sec": 0, 00:09:28.598 "w_mbytes_per_sec": 0 00:09:28.598 }, 00:09:28.598 "claimed": true, 00:09:28.598 "claim_type": "exclusive_write", 00:09:28.598 "zoned": false, 00:09:28.598 "supported_io_types": { 00:09:28.598 "read": true, 00:09:28.598 "write": true, 00:09:28.598 "unmap": true, 00:09:28.598 "flush": true, 00:09:28.598 "reset": true, 00:09:28.598 "nvme_admin": false, 00:09:28.598 "nvme_io": false, 00:09:28.598 "nvme_io_md": false, 00:09:28.598 "write_zeroes": true, 00:09:28.598 "zcopy": true, 00:09:28.598 "get_zone_info": false, 00:09:28.598 "zone_management": false, 00:09:28.598 "zone_append": false, 00:09:28.598 "compare": false, 00:09:28.598 "compare_and_write": false, 00:09:28.598 "abort": true, 00:09:28.598 "seek_hole": false, 00:09:28.598 "seek_data": false, 00:09:28.598 "copy": true, 00:09:28.598 "nvme_iov_md": false 00:09:28.598 }, 00:09:28.598 "memory_domains": [ 00:09:28.598 { 00:09:28.598 "dma_device_id": "system", 00:09:28.598 "dma_device_type": 1 00:09:28.598 }, 00:09:28.598 { 00:09:28.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:28.598 "dma_device_type": 2 00:09:28.598 } 00:09:28.598 ], 00:09:28.598 "driver_specific": {} 00:09:28.598 } 00:09:28.598 ] 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.598 "name": "Existed_Raid", 00:09:28.598 "uuid": "0e525a90-ce37-485a-8bb8-c57227cbd1bb", 00:09:28.598 "strip_size_kb": 64, 00:09:28.598 "state": "configuring", 00:09:28.598 "raid_level": "concat", 00:09:28.598 "superblock": true, 00:09:28.598 "num_base_bdevs": 3, 00:09:28.598 "num_base_bdevs_discovered": 2, 00:09:28.598 "num_base_bdevs_operational": 3, 00:09:28.598 "base_bdevs_list": [ 00:09:28.598 { 00:09:28.598 "name": "BaseBdev1", 00:09:28.598 "uuid": "c6faf8da-8f3a-47dc-b4fa-40f0fe9ffe35", 00:09:28.598 "is_configured": true, 00:09:28.598 "data_offset": 2048, 00:09:28.598 "data_size": 63488 00:09:28.598 }, 00:09:28.598 { 00:09:28.598 "name": null, 00:09:28.598 "uuid": "41fe705e-b627-4e7f-b8d7-48613eddd321", 00:09:28.598 "is_configured": false, 00:09:28.598 "data_offset": 0, 00:09:28.598 "data_size": 63488 00:09:28.598 }, 00:09:28.598 { 00:09:28.598 "name": "BaseBdev3", 00:09:28.598 "uuid": "099f870d-0ed1-4c63-b9c4-ca16249376de", 00:09:28.598 "is_configured": true, 00:09:28.598 "data_offset": 2048, 00:09:28.598 "data_size": 63488 00:09:28.598 } 00:09:28.598 ] 00:09:28.598 }' 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.598 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.856 [2024-11-26 17:54:10.665641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.856 "name": "Existed_Raid", 00:09:28.856 "uuid": "0e525a90-ce37-485a-8bb8-c57227cbd1bb", 00:09:28.856 "strip_size_kb": 64, 00:09:28.856 "state": "configuring", 00:09:28.856 "raid_level": "concat", 00:09:28.856 "superblock": true, 00:09:28.856 "num_base_bdevs": 3, 00:09:28.856 "num_base_bdevs_discovered": 1, 00:09:28.856 "num_base_bdevs_operational": 3, 00:09:28.856 "base_bdevs_list": [ 00:09:28.856 { 00:09:28.856 "name": "BaseBdev1", 00:09:28.856 "uuid": "c6faf8da-8f3a-47dc-b4fa-40f0fe9ffe35", 00:09:28.856 "is_configured": true, 00:09:28.856 "data_offset": 2048, 00:09:28.856 "data_size": 63488 00:09:28.856 }, 00:09:28.856 { 00:09:28.856 "name": null, 00:09:28.856 "uuid": "41fe705e-b627-4e7f-b8d7-48613eddd321", 00:09:28.856 "is_configured": false, 00:09:28.856 "data_offset": 0, 00:09:28.856 "data_size": 63488 00:09:28.856 }, 00:09:28.856 { 00:09:28.856 "name": null, 00:09:28.856 "uuid": "099f870d-0ed1-4c63-b9c4-ca16249376de", 00:09:28.856 "is_configured": false, 00:09:28.856 "data_offset": 0, 00:09:28.856 "data_size": 63488 00:09:28.856 } 00:09:28.856 ] 00:09:28.856 }' 00:09:28.856 17:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.857 17:54:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:29.421 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:29.421 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.421 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.421 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.421 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.421 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:29.421 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:29.421 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.421 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.421 [2024-11-26 17:54:11.189165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:29.421 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.422 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:29.422 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.422 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.422 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.422 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.422 17:54:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.422 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.422 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.422 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.422 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.422 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.422 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.422 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.422 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.422 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.422 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.422 "name": "Existed_Raid", 00:09:29.422 "uuid": "0e525a90-ce37-485a-8bb8-c57227cbd1bb", 00:09:29.422 "strip_size_kb": 64, 00:09:29.422 "state": "configuring", 00:09:29.422 "raid_level": "concat", 00:09:29.422 "superblock": true, 00:09:29.422 "num_base_bdevs": 3, 00:09:29.422 "num_base_bdevs_discovered": 2, 00:09:29.422 "num_base_bdevs_operational": 3, 00:09:29.422 "base_bdevs_list": [ 00:09:29.422 { 00:09:29.422 "name": "BaseBdev1", 00:09:29.422 "uuid": "c6faf8da-8f3a-47dc-b4fa-40f0fe9ffe35", 00:09:29.422 "is_configured": true, 00:09:29.422 "data_offset": 2048, 00:09:29.422 "data_size": 63488 00:09:29.422 }, 00:09:29.422 { 00:09:29.422 "name": null, 00:09:29.422 "uuid": "41fe705e-b627-4e7f-b8d7-48613eddd321", 00:09:29.422 "is_configured": 
false, 00:09:29.422 "data_offset": 0, 00:09:29.422 "data_size": 63488 00:09:29.422 }, 00:09:29.422 { 00:09:29.422 "name": "BaseBdev3", 00:09:29.422 "uuid": "099f870d-0ed1-4c63-b9c4-ca16249376de", 00:09:29.422 "is_configured": true, 00:09:29.422 "data_offset": 2048, 00:09:29.422 "data_size": 63488 00:09:29.422 } 00:09:29.422 ] 00:09:29.422 }' 00:09:29.422 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.422 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.987 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:29.987 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.987 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.987 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.987 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.987 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:29.987 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:29.987 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.988 [2024-11-26 17:54:11.653149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:29.988 17:54:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.988 "name": "Existed_Raid", 00:09:29.988 "uuid": "0e525a90-ce37-485a-8bb8-c57227cbd1bb", 00:09:29.988 "strip_size_kb": 64, 00:09:29.988 "state": "configuring", 00:09:29.988 "raid_level": "concat", 00:09:29.988 "superblock": true, 00:09:29.988 "num_base_bdevs": 3, 00:09:29.988 
"num_base_bdevs_discovered": 1, 00:09:29.988 "num_base_bdevs_operational": 3, 00:09:29.988 "base_bdevs_list": [ 00:09:29.988 { 00:09:29.988 "name": null, 00:09:29.988 "uuid": "c6faf8da-8f3a-47dc-b4fa-40f0fe9ffe35", 00:09:29.988 "is_configured": false, 00:09:29.988 "data_offset": 0, 00:09:29.988 "data_size": 63488 00:09:29.988 }, 00:09:29.988 { 00:09:29.988 "name": null, 00:09:29.988 "uuid": "41fe705e-b627-4e7f-b8d7-48613eddd321", 00:09:29.988 "is_configured": false, 00:09:29.988 "data_offset": 0, 00:09:29.988 "data_size": 63488 00:09:29.988 }, 00:09:29.988 { 00:09:29.988 "name": "BaseBdev3", 00:09:29.988 "uuid": "099f870d-0ed1-4c63-b9c4-ca16249376de", 00:09:29.988 "is_configured": true, 00:09:29.988 "data_offset": 2048, 00:09:29.988 "data_size": 63488 00:09:29.988 } 00:09:29.988 ] 00:09:29.988 }' 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.988 17:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.557 17:54:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.557 [2024-11-26 17:54:12.283593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.557 
17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.557 "name": "Existed_Raid", 00:09:30.557 "uuid": "0e525a90-ce37-485a-8bb8-c57227cbd1bb", 00:09:30.557 "strip_size_kb": 64, 00:09:30.557 "state": "configuring", 00:09:30.557 "raid_level": "concat", 00:09:30.557 "superblock": true, 00:09:30.557 "num_base_bdevs": 3, 00:09:30.557 "num_base_bdevs_discovered": 2, 00:09:30.557 "num_base_bdevs_operational": 3, 00:09:30.557 "base_bdevs_list": [ 00:09:30.557 { 00:09:30.557 "name": null, 00:09:30.557 "uuid": "c6faf8da-8f3a-47dc-b4fa-40f0fe9ffe35", 00:09:30.557 "is_configured": false, 00:09:30.557 "data_offset": 0, 00:09:30.557 "data_size": 63488 00:09:30.557 }, 00:09:30.557 { 00:09:30.557 "name": "BaseBdev2", 00:09:30.557 "uuid": "41fe705e-b627-4e7f-b8d7-48613eddd321", 00:09:30.557 "is_configured": true, 00:09:30.557 "data_offset": 2048, 00:09:30.557 "data_size": 63488 00:09:30.557 }, 00:09:30.557 { 00:09:30.557 "name": "BaseBdev3", 00:09:30.557 "uuid": "099f870d-0ed1-4c63-b9c4-ca16249376de", 00:09:30.557 "is_configured": true, 00:09:30.557 "data_offset": 2048, 00:09:30.557 "data_size": 63488 00:09:30.557 } 00:09:30.557 ] 00:09:30.557 }' 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.557 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c6faf8da-8f3a-47dc-b4fa-40f0fe9ffe35 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.128 [2024-11-26 17:54:12.824105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:31.128 [2024-11-26 17:54:12.824509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:31.128 [2024-11-26 17:54:12.824567] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:31.128 [2024-11-26 17:54:12.824943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:31.128 [2024-11-26 17:54:12.825179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:31.128 [2024-11-26 17:54:12.825231] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:31.128 NewBaseBdev 
00:09:31.128 [2024-11-26 17:54:12.825523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.128 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.128 [ 00:09:31.128 { 00:09:31.128 "name": "NewBaseBdev", 00:09:31.128 "aliases": [ 00:09:31.128 "c6faf8da-8f3a-47dc-b4fa-40f0fe9ffe35" 00:09:31.128 ], 00:09:31.128 "product_name": "Malloc disk", 00:09:31.128 "block_size": 512, 
00:09:31.128 "num_blocks": 65536, 00:09:31.129 "uuid": "c6faf8da-8f3a-47dc-b4fa-40f0fe9ffe35", 00:09:31.129 "assigned_rate_limits": { 00:09:31.129 "rw_ios_per_sec": 0, 00:09:31.129 "rw_mbytes_per_sec": 0, 00:09:31.129 "r_mbytes_per_sec": 0, 00:09:31.129 "w_mbytes_per_sec": 0 00:09:31.129 }, 00:09:31.129 "claimed": true, 00:09:31.129 "claim_type": "exclusive_write", 00:09:31.129 "zoned": false, 00:09:31.129 "supported_io_types": { 00:09:31.129 "read": true, 00:09:31.129 "write": true, 00:09:31.129 "unmap": true, 00:09:31.129 "flush": true, 00:09:31.129 "reset": true, 00:09:31.129 "nvme_admin": false, 00:09:31.129 "nvme_io": false, 00:09:31.129 "nvme_io_md": false, 00:09:31.129 "write_zeroes": true, 00:09:31.129 "zcopy": true, 00:09:31.129 "get_zone_info": false, 00:09:31.129 "zone_management": false, 00:09:31.129 "zone_append": false, 00:09:31.129 "compare": false, 00:09:31.129 "compare_and_write": false, 00:09:31.129 "abort": true, 00:09:31.129 "seek_hole": false, 00:09:31.129 "seek_data": false, 00:09:31.129 "copy": true, 00:09:31.129 "nvme_iov_md": false 00:09:31.129 }, 00:09:31.129 "memory_domains": [ 00:09:31.129 { 00:09:31.129 "dma_device_id": "system", 00:09:31.129 "dma_device_type": 1 00:09:31.129 }, 00:09:31.129 { 00:09:31.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.129 "dma_device_type": 2 00:09:31.129 } 00:09:31.129 ], 00:09:31.129 "driver_specific": {} 00:09:31.129 } 00:09:31.129 ] 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.129 "name": "Existed_Raid", 00:09:31.129 "uuid": "0e525a90-ce37-485a-8bb8-c57227cbd1bb", 00:09:31.129 "strip_size_kb": 64, 00:09:31.129 "state": "online", 00:09:31.129 "raid_level": "concat", 00:09:31.129 "superblock": true, 00:09:31.129 "num_base_bdevs": 3, 00:09:31.129 "num_base_bdevs_discovered": 3, 00:09:31.129 "num_base_bdevs_operational": 3, 00:09:31.129 "base_bdevs_list": [ 00:09:31.129 { 00:09:31.129 "name": "NewBaseBdev", 00:09:31.129 "uuid": 
"c6faf8da-8f3a-47dc-b4fa-40f0fe9ffe35", 00:09:31.129 "is_configured": true, 00:09:31.129 "data_offset": 2048, 00:09:31.129 "data_size": 63488 00:09:31.129 }, 00:09:31.129 { 00:09:31.129 "name": "BaseBdev2", 00:09:31.129 "uuid": "41fe705e-b627-4e7f-b8d7-48613eddd321", 00:09:31.129 "is_configured": true, 00:09:31.129 "data_offset": 2048, 00:09:31.129 "data_size": 63488 00:09:31.129 }, 00:09:31.129 { 00:09:31.129 "name": "BaseBdev3", 00:09:31.129 "uuid": "099f870d-0ed1-4c63-b9c4-ca16249376de", 00:09:31.129 "is_configured": true, 00:09:31.129 "data_offset": 2048, 00:09:31.129 "data_size": 63488 00:09:31.129 } 00:09:31.129 ] 00:09:31.129 }' 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.129 17:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.700 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:31.700 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:31.700 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:31.700 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:31.700 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:31.700 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:31.700 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:31.700 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:31.700 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.700 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:31.700 [2024-11-26 17:54:13.383613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.700 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.700 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:31.700 "name": "Existed_Raid", 00:09:31.700 "aliases": [ 00:09:31.700 "0e525a90-ce37-485a-8bb8-c57227cbd1bb" 00:09:31.700 ], 00:09:31.700 "product_name": "Raid Volume", 00:09:31.700 "block_size": 512, 00:09:31.700 "num_blocks": 190464, 00:09:31.700 "uuid": "0e525a90-ce37-485a-8bb8-c57227cbd1bb", 00:09:31.700 "assigned_rate_limits": { 00:09:31.700 "rw_ios_per_sec": 0, 00:09:31.700 "rw_mbytes_per_sec": 0, 00:09:31.700 "r_mbytes_per_sec": 0, 00:09:31.700 "w_mbytes_per_sec": 0 00:09:31.700 }, 00:09:31.700 "claimed": false, 00:09:31.700 "zoned": false, 00:09:31.700 "supported_io_types": { 00:09:31.700 "read": true, 00:09:31.700 "write": true, 00:09:31.700 "unmap": true, 00:09:31.700 "flush": true, 00:09:31.700 "reset": true, 00:09:31.700 "nvme_admin": false, 00:09:31.700 "nvme_io": false, 00:09:31.700 "nvme_io_md": false, 00:09:31.700 "write_zeroes": true, 00:09:31.700 "zcopy": false, 00:09:31.700 "get_zone_info": false, 00:09:31.700 "zone_management": false, 00:09:31.700 "zone_append": false, 00:09:31.700 "compare": false, 00:09:31.700 "compare_and_write": false, 00:09:31.700 "abort": false, 00:09:31.700 "seek_hole": false, 00:09:31.700 "seek_data": false, 00:09:31.700 "copy": false, 00:09:31.700 "nvme_iov_md": false 00:09:31.700 }, 00:09:31.700 "memory_domains": [ 00:09:31.700 { 00:09:31.700 "dma_device_id": "system", 00:09:31.700 "dma_device_type": 1 00:09:31.700 }, 00:09:31.700 { 00:09:31.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.700 "dma_device_type": 2 00:09:31.700 }, 00:09:31.700 { 00:09:31.700 "dma_device_id": "system", 00:09:31.700 "dma_device_type": 1 00:09:31.700 }, 00:09:31.700 { 00:09:31.700 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.700 "dma_device_type": 2 00:09:31.700 }, 00:09:31.700 { 00:09:31.700 "dma_device_id": "system", 00:09:31.700 "dma_device_type": 1 00:09:31.700 }, 00:09:31.700 { 00:09:31.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.700 "dma_device_type": 2 00:09:31.700 } 00:09:31.700 ], 00:09:31.700 "driver_specific": { 00:09:31.700 "raid": { 00:09:31.700 "uuid": "0e525a90-ce37-485a-8bb8-c57227cbd1bb", 00:09:31.700 "strip_size_kb": 64, 00:09:31.700 "state": "online", 00:09:31.700 "raid_level": "concat", 00:09:31.700 "superblock": true, 00:09:31.700 "num_base_bdevs": 3, 00:09:31.700 "num_base_bdevs_discovered": 3, 00:09:31.700 "num_base_bdevs_operational": 3, 00:09:31.700 "base_bdevs_list": [ 00:09:31.700 { 00:09:31.700 "name": "NewBaseBdev", 00:09:31.700 "uuid": "c6faf8da-8f3a-47dc-b4fa-40f0fe9ffe35", 00:09:31.700 "is_configured": true, 00:09:31.700 "data_offset": 2048, 00:09:31.700 "data_size": 63488 00:09:31.700 }, 00:09:31.700 { 00:09:31.700 "name": "BaseBdev2", 00:09:31.700 "uuid": "41fe705e-b627-4e7f-b8d7-48613eddd321", 00:09:31.700 "is_configured": true, 00:09:31.700 "data_offset": 2048, 00:09:31.700 "data_size": 63488 00:09:31.700 }, 00:09:31.700 { 00:09:31.700 "name": "BaseBdev3", 00:09:31.700 "uuid": "099f870d-0ed1-4c63-b9c4-ca16249376de", 00:09:31.700 "is_configured": true, 00:09:31.700 "data_offset": 2048, 00:09:31.700 "data_size": 63488 00:09:31.700 } 00:09:31.700 ] 00:09:31.700 } 00:09:31.700 } 00:09:31.700 }' 00:09:31.700 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:31.700 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:31.700 BaseBdev2 00:09:31.700 BaseBdev3' 00:09:31.701 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:31.701 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:31.701 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.701 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.701 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:31.701 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.701 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.701 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.701 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.701 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.701 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.701 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:31.701 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.701 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.701 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.701 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.960 [2024-11-26 17:54:13.642883] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:31.960 [2024-11-26 17:54:13.643052] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.960 [2024-11-26 17:54:13.643187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.960 [2024-11-26 17:54:13.643257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.960 [2024-11-26 17:54:13.643273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66435 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66435 ']' 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66435 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66435 00:09:31.960 killing process with pid 66435 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66435' 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66435 00:09:31.960 [2024-11-26 17:54:13.692802] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.960 17:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66435 00:09:32.219 [2024-11-26 17:54:14.042057] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.600 ************************************ 00:09:33.600 END TEST raid_state_function_test_sb 00:09:33.600 ************************************ 00:09:33.600 17:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:33.600 00:09:33.600 real 0m11.318s 
00:09:33.600 user 0m17.781s 00:09:33.600 sys 0m1.915s 00:09:33.600 17:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.600 17:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.600 17:54:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:33.600 17:54:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:33.600 17:54:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.600 17:54:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.600 ************************************ 00:09:33.600 START TEST raid_superblock_test 00:09:33.600 ************************************ 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:33.600 17:54:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67061 00:09:33.600 17:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67061 00:09:33.859 17:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67061 ']' 00:09:33.859 17:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.859 17:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.859 17:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.859 17:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.859 17:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.860 [2024-11-26 17:54:15.557813] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:09:33.860 [2024-11-26 17:54:15.558129] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67061 ] 00:09:34.118 [2024-11-26 17:54:15.732760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.118 [2024-11-26 17:54:15.888732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.376 [2024-11-26 17:54:16.146186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.376 [2024-11-26 17:54:16.146289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.942 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.942 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:34.942 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:34.942 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:34.942 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:34.942 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:34.942 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:34.942 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:34.942 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:34.942 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:34.942 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:34.942 
17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.942 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.943 malloc1 00:09:34.943 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.943 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:34.943 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.943 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.943 [2024-11-26 17:54:16.755948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:34.943 [2024-11-26 17:54:16.756072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.943 [2024-11-26 17:54:16.756109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:34.943 [2024-11-26 17:54:16.756123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.943 [2024-11-26 17:54:16.759097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.943 [2024-11-26 17:54:16.759169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:34.943 pt1 00:09:34.943 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.943 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:34.943 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:34.943 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:34.943 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:34.943 17:54:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:34.943 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:34.943 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:34.943 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:34.943 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:34.943 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.943 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.201 malloc2 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.201 [2024-11-26 17:54:16.818801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:35.201 [2024-11-26 17:54:16.818989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.201 [2024-11-26 17:54:16.819048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:35.201 [2024-11-26 17:54:16.819063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.201 [2024-11-26 17:54:16.821957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.201 [2024-11-26 17:54:16.822106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:35.201 
pt2 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.201 malloc3 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.201 [2024-11-26 17:54:16.891289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:35.201 [2024-11-26 17:54:16.891383] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.201 [2024-11-26 17:54:16.891415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:35.201 [2024-11-26 17:54:16.891427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.201 [2024-11-26 17:54:16.894327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.201 [2024-11-26 17:54:16.894392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:35.201 pt3 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.201 [2024-11-26 17:54:16.899456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:35.201 [2024-11-26 17:54:16.901929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:35.201 [2024-11-26 17:54:16.902052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:35.201 [2024-11-26 17:54:16.902285] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:35.201 [2024-11-26 17:54:16.902311] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:35.201 [2024-11-26 17:54:16.902688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:35.201 [2024-11-26 17:54:16.902916] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:35.201 [2024-11-26 17:54:16.902927] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:35.201 [2024-11-26 17:54:16.903194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.201 "name": "raid_bdev1", 00:09:35.201 "uuid": "830a868a-c4c4-47c2-a985-de512360502c", 00:09:35.201 "strip_size_kb": 64, 00:09:35.201 "state": "online", 00:09:35.201 "raid_level": "concat", 00:09:35.201 "superblock": true, 00:09:35.201 "num_base_bdevs": 3, 00:09:35.201 "num_base_bdevs_discovered": 3, 00:09:35.201 "num_base_bdevs_operational": 3, 00:09:35.201 "base_bdevs_list": [ 00:09:35.201 { 00:09:35.201 "name": "pt1", 00:09:35.201 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.201 "is_configured": true, 00:09:35.201 "data_offset": 2048, 00:09:35.201 "data_size": 63488 00:09:35.201 }, 00:09:35.201 { 00:09:35.201 "name": "pt2", 00:09:35.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.201 "is_configured": true, 00:09:35.201 "data_offset": 2048, 00:09:35.201 "data_size": 63488 00:09:35.201 }, 00:09:35.201 { 00:09:35.201 "name": "pt3", 00:09:35.201 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:35.201 "is_configured": true, 00:09:35.201 "data_offset": 2048, 00:09:35.201 "data_size": 63488 00:09:35.201 } 00:09:35.201 ] 00:09:35.201 }' 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.201 17:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.767 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:35.767 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:35.767 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.767 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:35.767 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.767 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.767 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:35.767 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.767 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.767 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.767 [2024-11-26 17:54:17.407077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.767 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.767 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.767 "name": "raid_bdev1", 00:09:35.767 "aliases": [ 00:09:35.767 "830a868a-c4c4-47c2-a985-de512360502c" 00:09:35.767 ], 00:09:35.767 "product_name": "Raid Volume", 00:09:35.767 "block_size": 512, 00:09:35.767 "num_blocks": 190464, 00:09:35.767 "uuid": "830a868a-c4c4-47c2-a985-de512360502c", 00:09:35.767 "assigned_rate_limits": { 00:09:35.767 "rw_ios_per_sec": 0, 00:09:35.767 "rw_mbytes_per_sec": 0, 00:09:35.767 "r_mbytes_per_sec": 0, 00:09:35.767 "w_mbytes_per_sec": 0 00:09:35.767 }, 00:09:35.767 "claimed": false, 00:09:35.767 "zoned": false, 00:09:35.767 "supported_io_types": { 00:09:35.767 "read": true, 00:09:35.767 "write": true, 00:09:35.767 "unmap": true, 00:09:35.767 "flush": true, 00:09:35.767 "reset": true, 00:09:35.767 "nvme_admin": false, 00:09:35.767 "nvme_io": false, 00:09:35.767 "nvme_io_md": false, 00:09:35.767 "write_zeroes": true, 00:09:35.767 "zcopy": false, 00:09:35.767 "get_zone_info": false, 00:09:35.767 "zone_management": false, 00:09:35.767 "zone_append": false, 00:09:35.767 "compare": 
false, 00:09:35.768 "compare_and_write": false, 00:09:35.768 "abort": false, 00:09:35.768 "seek_hole": false, 00:09:35.768 "seek_data": false, 00:09:35.768 "copy": false, 00:09:35.768 "nvme_iov_md": false 00:09:35.768 }, 00:09:35.768 "memory_domains": [ 00:09:35.768 { 00:09:35.768 "dma_device_id": "system", 00:09:35.768 "dma_device_type": 1 00:09:35.768 }, 00:09:35.768 { 00:09:35.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.768 "dma_device_type": 2 00:09:35.768 }, 00:09:35.768 { 00:09:35.768 "dma_device_id": "system", 00:09:35.768 "dma_device_type": 1 00:09:35.768 }, 00:09:35.768 { 00:09:35.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.768 "dma_device_type": 2 00:09:35.768 }, 00:09:35.768 { 00:09:35.768 "dma_device_id": "system", 00:09:35.768 "dma_device_type": 1 00:09:35.768 }, 00:09:35.768 { 00:09:35.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.768 "dma_device_type": 2 00:09:35.768 } 00:09:35.768 ], 00:09:35.768 "driver_specific": { 00:09:35.768 "raid": { 00:09:35.768 "uuid": "830a868a-c4c4-47c2-a985-de512360502c", 00:09:35.768 "strip_size_kb": 64, 00:09:35.768 "state": "online", 00:09:35.768 "raid_level": "concat", 00:09:35.768 "superblock": true, 00:09:35.768 "num_base_bdevs": 3, 00:09:35.768 "num_base_bdevs_discovered": 3, 00:09:35.768 "num_base_bdevs_operational": 3, 00:09:35.768 "base_bdevs_list": [ 00:09:35.768 { 00:09:35.768 "name": "pt1", 00:09:35.768 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.768 "is_configured": true, 00:09:35.768 "data_offset": 2048, 00:09:35.768 "data_size": 63488 00:09:35.768 }, 00:09:35.768 { 00:09:35.768 "name": "pt2", 00:09:35.768 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.768 "is_configured": true, 00:09:35.768 "data_offset": 2048, 00:09:35.768 "data_size": 63488 00:09:35.768 }, 00:09:35.768 { 00:09:35.768 "name": "pt3", 00:09:35.768 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:35.768 "is_configured": true, 00:09:35.768 "data_offset": 2048, 00:09:35.768 
"data_size": 63488 00:09:35.768 } 00:09:35.768 ] 00:09:35.768 } 00:09:35.768 } 00:09:35.768 }' 00:09:35.768 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.768 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:35.768 pt2 00:09:35.768 pt3' 00:09:35.768 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.768 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.768 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.768 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:35.768 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.768 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.768 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.768 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.768 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.768 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.768 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.768 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:35.768 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.768 17:54:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.768 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.768 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.026 [2024-11-26 17:54:17.690604] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.026 17:54:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=830a868a-c4c4-47c2-a985-de512360502c 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 830a868a-c4c4-47c2-a985-de512360502c ']' 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.026 [2024-11-26 17:54:17.738160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:36.026 [2024-11-26 17:54:17.738222] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.026 [2024-11-26 17:54:17.738363] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.026 [2024-11-26 17:54:17.738448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.026 [2024-11-26 17:54:17.738461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.026 17:54:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.026 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.027 [2024-11-26 17:54:17.873985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:36.027 [2024-11-26 17:54:17.876474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:09:36.027 [2024-11-26 17:54:17.876567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:36.027 [2024-11-26 17:54:17.876648] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:36.027 [2024-11-26 17:54:17.876724] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:36.027 [2024-11-26 17:54:17.876749] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:36.027 [2024-11-26 17:54:17.876770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:36.027 [2024-11-26 17:54:17.876783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:36.027 request: 00:09:36.027 { 00:09:36.027 "name": "raid_bdev1", 00:09:36.027 "raid_level": "concat", 00:09:36.027 "base_bdevs": [ 00:09:36.027 "malloc1", 00:09:36.027 "malloc2", 00:09:36.027 "malloc3" 00:09:36.027 ], 00:09:36.027 "strip_size_kb": 64, 00:09:36.027 "superblock": false, 00:09:36.027 "method": "bdev_raid_create", 00:09:36.027 "req_id": 1 00:09:36.027 } 00:09:36.027 Got JSON-RPC error response 00:09:36.027 response: 00:09:36.027 { 00:09:36.027 "code": -17, 00:09:36.027 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:36.027 } 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es 
== 0 )) 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.027 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.286 [2024-11-26 17:54:17.941797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:36.286 [2024-11-26 17:54:17.942029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.286 [2024-11-26 17:54:17.942110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:36.286 [2024-11-26 17:54:17.942151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.286 [2024-11-26 17:54:17.945114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.286 [2024-11-26 17:54:17.945276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:36.286 [2024-11-26 17:54:17.945439] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:36.286 [2024-11-26 17:54:17.945530] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:36.286 pt1 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.286 "name": "raid_bdev1", 
00:09:36.286 "uuid": "830a868a-c4c4-47c2-a985-de512360502c", 00:09:36.286 "strip_size_kb": 64, 00:09:36.286 "state": "configuring", 00:09:36.286 "raid_level": "concat", 00:09:36.286 "superblock": true, 00:09:36.286 "num_base_bdevs": 3, 00:09:36.286 "num_base_bdevs_discovered": 1, 00:09:36.286 "num_base_bdevs_operational": 3, 00:09:36.286 "base_bdevs_list": [ 00:09:36.286 { 00:09:36.286 "name": "pt1", 00:09:36.286 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:36.286 "is_configured": true, 00:09:36.286 "data_offset": 2048, 00:09:36.286 "data_size": 63488 00:09:36.286 }, 00:09:36.286 { 00:09:36.286 "name": null, 00:09:36.286 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:36.286 "is_configured": false, 00:09:36.286 "data_offset": 2048, 00:09:36.286 "data_size": 63488 00:09:36.286 }, 00:09:36.286 { 00:09:36.286 "name": null, 00:09:36.286 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:36.286 "is_configured": false, 00:09:36.286 "data_offset": 2048, 00:09:36.286 "data_size": 63488 00:09:36.286 } 00:09:36.286 ] 00:09:36.286 }' 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.286 17:54:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.564 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:36.822 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:36.822 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.822 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.822 [2024-11-26 17:54:18.433286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:36.822 [2024-11-26 17:54:18.433507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.822 [2024-11-26 17:54:18.433581] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:36.822 [2024-11-26 17:54:18.433622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.822 [2024-11-26 17:54:18.434254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.822 [2024-11-26 17:54:18.434347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:36.822 [2024-11-26 17:54:18.434519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:36.822 [2024-11-26 17:54:18.434598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:36.822 pt2 00:09:36.822 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.822 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:36.822 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.822 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.822 [2024-11-26 17:54:18.445342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:36.822 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.822 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:36.822 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.822 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.822 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.822 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.822 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:36.822 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.823 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.823 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.823 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.823 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.823 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.823 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.823 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.823 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.823 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.823 "name": "raid_bdev1", 00:09:36.823 "uuid": "830a868a-c4c4-47c2-a985-de512360502c", 00:09:36.823 "strip_size_kb": 64, 00:09:36.823 "state": "configuring", 00:09:36.823 "raid_level": "concat", 00:09:36.823 "superblock": true, 00:09:36.823 "num_base_bdevs": 3, 00:09:36.823 "num_base_bdevs_discovered": 1, 00:09:36.823 "num_base_bdevs_operational": 3, 00:09:36.823 "base_bdevs_list": [ 00:09:36.823 { 00:09:36.823 "name": "pt1", 00:09:36.823 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:36.823 "is_configured": true, 00:09:36.823 "data_offset": 2048, 00:09:36.823 "data_size": 63488 00:09:36.823 }, 00:09:36.823 { 00:09:36.823 "name": null, 00:09:36.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:36.823 "is_configured": false, 00:09:36.823 "data_offset": 0, 00:09:36.823 "data_size": 63488 00:09:36.823 }, 00:09:36.823 { 00:09:36.823 "name": null, 00:09:36.823 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:36.823 "is_configured": false, 00:09:36.823 "data_offset": 2048, 00:09:36.823 "data_size": 63488 00:09:36.823 } 00:09:36.823 ] 00:09:36.823 }' 00:09:36.823 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.823 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.081 [2024-11-26 17:54:18.897067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:37.081 [2024-11-26 17:54:18.897251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.081 [2024-11-26 17:54:18.897305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:37.081 [2024-11-26 17:54:18.897346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.081 [2024-11-26 17:54:18.897993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.081 [2024-11-26 17:54:18.898100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:37.081 [2024-11-26 17:54:18.898266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:37.081 [2024-11-26 17:54:18.898333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:37.081 pt2 00:09:37.081 17:54:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.081 [2024-11-26 17:54:18.909078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:37.081 [2024-11-26 17:54:18.909235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.081 [2024-11-26 17:54:18.909284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:37.081 [2024-11-26 17:54:18.909322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.081 [2024-11-26 17:54:18.909928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.081 [2024-11-26 17:54:18.910014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:37.081 [2024-11-26 17:54:18.910164] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:37.081 [2024-11-26 17:54:18.910230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:37.081 [2024-11-26 17:54:18.910429] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:37.081 [2024-11-26 17:54:18.910476] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:37.081 [2024-11-26 17:54:18.910821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:37.081 [2024-11-26 17:54:18.911066] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:37.081 [2024-11-26 17:54:18.911114] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:37.081 [2024-11-26 17:54:18.911347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.081 pt3 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.081 17:54:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.081 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.338 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.339 "name": "raid_bdev1", 00:09:37.339 "uuid": "830a868a-c4c4-47c2-a985-de512360502c", 00:09:37.339 "strip_size_kb": 64, 00:09:37.339 "state": "online", 00:09:37.339 "raid_level": "concat", 00:09:37.339 "superblock": true, 00:09:37.339 "num_base_bdevs": 3, 00:09:37.339 "num_base_bdevs_discovered": 3, 00:09:37.339 "num_base_bdevs_operational": 3, 00:09:37.339 "base_bdevs_list": [ 00:09:37.339 { 00:09:37.339 "name": "pt1", 00:09:37.339 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:37.339 "is_configured": true, 00:09:37.339 "data_offset": 2048, 00:09:37.339 "data_size": 63488 00:09:37.339 }, 00:09:37.339 { 00:09:37.339 "name": "pt2", 00:09:37.339 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:37.339 "is_configured": true, 00:09:37.339 "data_offset": 2048, 00:09:37.339 "data_size": 63488 00:09:37.339 }, 00:09:37.339 { 00:09:37.339 "name": "pt3", 00:09:37.339 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:37.339 "is_configured": true, 00:09:37.339 "data_offset": 2048, 00:09:37.339 "data_size": 63488 00:09:37.339 } 00:09:37.339 ] 00:09:37.339 }' 00:09:37.339 17:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.339 17:54:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.597 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:37.597 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:09:37.597 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:37.597 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:37.597 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:37.597 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:37.597 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:37.597 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:37.597 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.597 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.597 [2024-11-26 17:54:19.409441] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.597 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.597 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:37.597 "name": "raid_bdev1", 00:09:37.597 "aliases": [ 00:09:37.597 "830a868a-c4c4-47c2-a985-de512360502c" 00:09:37.597 ], 00:09:37.597 "product_name": "Raid Volume", 00:09:37.597 "block_size": 512, 00:09:37.597 "num_blocks": 190464, 00:09:37.597 "uuid": "830a868a-c4c4-47c2-a985-de512360502c", 00:09:37.597 "assigned_rate_limits": { 00:09:37.597 "rw_ios_per_sec": 0, 00:09:37.597 "rw_mbytes_per_sec": 0, 00:09:37.597 "r_mbytes_per_sec": 0, 00:09:37.597 "w_mbytes_per_sec": 0 00:09:37.597 }, 00:09:37.597 "claimed": false, 00:09:37.597 "zoned": false, 00:09:37.597 "supported_io_types": { 00:09:37.597 "read": true, 00:09:37.597 "write": true, 00:09:37.597 "unmap": true, 00:09:37.597 "flush": true, 00:09:37.597 "reset": true, 00:09:37.597 "nvme_admin": false, 00:09:37.597 "nvme_io": false, 
00:09:37.597 "nvme_io_md": false, 00:09:37.597 "write_zeroes": true, 00:09:37.597 "zcopy": false, 00:09:37.597 "get_zone_info": false, 00:09:37.597 "zone_management": false, 00:09:37.597 "zone_append": false, 00:09:37.597 "compare": false, 00:09:37.597 "compare_and_write": false, 00:09:37.597 "abort": false, 00:09:37.597 "seek_hole": false, 00:09:37.597 "seek_data": false, 00:09:37.597 "copy": false, 00:09:37.597 "nvme_iov_md": false 00:09:37.597 }, 00:09:37.597 "memory_domains": [ 00:09:37.597 { 00:09:37.597 "dma_device_id": "system", 00:09:37.597 "dma_device_type": 1 00:09:37.597 }, 00:09:37.597 { 00:09:37.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.597 "dma_device_type": 2 00:09:37.597 }, 00:09:37.597 { 00:09:37.597 "dma_device_id": "system", 00:09:37.597 "dma_device_type": 1 00:09:37.597 }, 00:09:37.597 { 00:09:37.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.597 "dma_device_type": 2 00:09:37.597 }, 00:09:37.597 { 00:09:37.597 "dma_device_id": "system", 00:09:37.597 "dma_device_type": 1 00:09:37.597 }, 00:09:37.597 { 00:09:37.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.597 "dma_device_type": 2 00:09:37.597 } 00:09:37.597 ], 00:09:37.597 "driver_specific": { 00:09:37.597 "raid": { 00:09:37.597 "uuid": "830a868a-c4c4-47c2-a985-de512360502c", 00:09:37.597 "strip_size_kb": 64, 00:09:37.597 "state": "online", 00:09:37.597 "raid_level": "concat", 00:09:37.597 "superblock": true, 00:09:37.597 "num_base_bdevs": 3, 00:09:37.597 "num_base_bdevs_discovered": 3, 00:09:37.597 "num_base_bdevs_operational": 3, 00:09:37.597 "base_bdevs_list": [ 00:09:37.597 { 00:09:37.597 "name": "pt1", 00:09:37.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:37.597 "is_configured": true, 00:09:37.597 "data_offset": 2048, 00:09:37.597 "data_size": 63488 00:09:37.597 }, 00:09:37.597 { 00:09:37.597 "name": "pt2", 00:09:37.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:37.597 "is_configured": true, 00:09:37.597 "data_offset": 2048, 00:09:37.597 
"data_size": 63488 00:09:37.597 }, 00:09:37.597 { 00:09:37.597 "name": "pt3", 00:09:37.597 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:37.597 "is_configured": true, 00:09:37.597 "data_offset": 2048, 00:09:37.597 "data_size": 63488 00:09:37.597 } 00:09:37.597 ] 00:09:37.597 } 00:09:37.597 } 00:09:37.597 }' 00:09:37.597 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:37.857 pt2 00:09:37.857 pt3' 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:37.857 [2024-11-26 17:54:19.661432] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 830a868a-c4c4-47c2-a985-de512360502c '!=' 830a868a-c4c4-47c2-a985-de512360502c ']' 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67061 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67061 ']' 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67061 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.857 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67061 00:09:38.116 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.116 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.116 killing process with pid 67061 00:09:38.116 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67061' 00:09:38.117 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67061 00:09:38.117 [2024-11-26 17:54:19.749397] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:09:38.117 [2024-11-26 17:54:19.749531] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.117 17:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67061 00:09:38.117 [2024-11-26 17:54:19.749614] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.117 [2024-11-26 17:54:19.749629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:38.375 [2024-11-26 17:54:20.107803] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:39.754 17:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:39.754 00:09:39.754 real 0m6.020s 00:09:39.754 user 0m8.682s 00:09:39.754 sys 0m0.867s 00:09:39.754 17:54:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.754 17:54:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.754 ************************************ 00:09:39.754 END TEST raid_superblock_test 00:09:39.754 ************************************ 00:09:39.754 17:54:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:39.754 17:54:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:39.754 17:54:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.754 17:54:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:39.754 ************************************ 00:09:39.754 START TEST raid_read_error_test 00:09:39.754 ************************************ 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:39.754 17:54:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SLvJ75h6og 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67325 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67325 00:09:39.754 17:54:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67325 ']' 00:09:39.755 17:54:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.755 17:54:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.755 17:54:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.755 17:54:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.755 17:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:39.755 17:54:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.014 [2024-11-26 17:54:21.644472] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:09:40.014 [2024-11-26 17:54:21.644731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67325 ] 00:09:40.014 [2024-11-26 17:54:21.813632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.274 [2024-11-26 17:54:21.967395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.533 [2024-11-26 17:54:22.218533] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.533 [2024-11-26 17:54:22.218677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.791 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.791 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:40.791 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:40.791 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:40.791 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.791 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.050 BaseBdev1_malloc 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.050 true 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.050 [2024-11-26 17:54:22.706247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:41.050 [2024-11-26 17:54:22.706350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.050 [2024-11-26 17:54:22.706383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:41.050 [2024-11-26 17:54:22.706397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.050 [2024-11-26 17:54:22.709335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.050 [2024-11-26 17:54:22.709411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:41.050 BaseBdev1 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.050 BaseBdev2_malloc 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.050 true 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.050 [2024-11-26 17:54:22.770892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:41.050 [2024-11-26 17:54:22.770993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.050 [2024-11-26 17:54:22.771039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:41.050 [2024-11-26 17:54:22.771056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.050 [2024-11-26 17:54:22.773962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.050 [2024-11-26 17:54:22.774049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:41.050 BaseBdev2 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.050 BaseBdev3_malloc 00:09:41.050 17:54:22 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.050 true 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.050 [2024-11-26 17:54:22.854954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:41.050 [2024-11-26 17:54:22.855073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.050 [2024-11-26 17:54:22.855105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:41.050 [2024-11-26 17:54:22.855118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.050 [2024-11-26 17:54:22.857927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.050 [2024-11-26 17:54:22.858002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:41.050 BaseBdev3 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.050 [2024-11-26 17:54:22.867128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:41.050 [2024-11-26 17:54:22.869531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.050 [2024-11-26 17:54:22.869678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.050 [2024-11-26 17:54:22.869956] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:41.050 [2024-11-26 17:54:22.869972] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:41.050 [2024-11-26 17:54:22.870353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:41.050 [2024-11-26 17:54:22.870568] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:41.050 [2024-11-26 17:54:22.870584] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:41.050 [2024-11-26 17:54:22.870808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.050 17:54:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.050 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.309 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.309 "name": "raid_bdev1", 00:09:41.309 "uuid": "bb995877-d65e-44c4-821d-1d10eaafcbde", 00:09:41.309 "strip_size_kb": 64, 00:09:41.309 "state": "online", 00:09:41.309 "raid_level": "concat", 00:09:41.309 "superblock": true, 00:09:41.309 "num_base_bdevs": 3, 00:09:41.309 "num_base_bdevs_discovered": 3, 00:09:41.309 "num_base_bdevs_operational": 3, 00:09:41.309 "base_bdevs_list": [ 00:09:41.309 { 00:09:41.309 "name": "BaseBdev1", 00:09:41.309 "uuid": "75bbb5ab-74ce-584e-89c2-225dadbb9a74", 00:09:41.309 "is_configured": true, 00:09:41.309 "data_offset": 2048, 00:09:41.309 "data_size": 63488 00:09:41.309 }, 00:09:41.309 { 00:09:41.309 "name": "BaseBdev2", 00:09:41.309 "uuid": "77f01796-8e97-5a17-8041-a8be58119601", 00:09:41.309 "is_configured": true, 00:09:41.309 "data_offset": 2048, 00:09:41.309 "data_size": 63488 
00:09:41.309 }, 00:09:41.309 { 00:09:41.309 "name": "BaseBdev3", 00:09:41.309 "uuid": "db6a63d6-1555-5209-900d-d55d22a43c12", 00:09:41.309 "is_configured": true, 00:09:41.309 "data_offset": 2048, 00:09:41.309 "data_size": 63488 00:09:41.309 } 00:09:41.309 ] 00:09:41.309 }' 00:09:41.309 17:54:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.309 17:54:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.567 17:54:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:41.567 17:54:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:41.567 [2024-11-26 17:54:23.415640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.500 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.758 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.758 "name": "raid_bdev1", 00:09:42.758 "uuid": "bb995877-d65e-44c4-821d-1d10eaafcbde", 00:09:42.758 "strip_size_kb": 64, 00:09:42.758 "state": "online", 00:09:42.758 "raid_level": "concat", 00:09:42.758 "superblock": true, 00:09:42.758 "num_base_bdevs": 3, 00:09:42.758 "num_base_bdevs_discovered": 3, 00:09:42.758 "num_base_bdevs_operational": 3, 00:09:42.758 "base_bdevs_list": [ 00:09:42.758 { 00:09:42.758 "name": "BaseBdev1", 00:09:42.758 "uuid": "75bbb5ab-74ce-584e-89c2-225dadbb9a74", 00:09:42.758 "is_configured": true, 00:09:42.758 "data_offset": 2048, 00:09:42.758 "data_size": 63488 
00:09:42.758 }, 00:09:42.758 { 00:09:42.758 "name": "BaseBdev2", 00:09:42.758 "uuid": "77f01796-8e97-5a17-8041-a8be58119601", 00:09:42.758 "is_configured": true, 00:09:42.758 "data_offset": 2048, 00:09:42.758 "data_size": 63488 00:09:42.758 }, 00:09:42.758 { 00:09:42.758 "name": "BaseBdev3", 00:09:42.758 "uuid": "db6a63d6-1555-5209-900d-d55d22a43c12", 00:09:42.758 "is_configured": true, 00:09:42.758 "data_offset": 2048, 00:09:42.758 "data_size": 63488 00:09:42.758 } 00:09:42.758 ] 00:09:42.758 }' 00:09:42.758 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.758 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.017 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:43.017 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.017 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.017 [2024-11-26 17:54:24.777357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:43.017 [2024-11-26 17:54:24.777399] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.017 [2024-11-26 17:54:24.780892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.017 [2024-11-26 17:54:24.781004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.017 [2024-11-26 17:54:24.781089] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.017 [2024-11-26 17:54:24.781147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:43.017 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.017 17:54:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # 
killprocess 67325 00:09:43.017 { 00:09:43.017 "results": [ 00:09:43.017 { 00:09:43.017 "job": "raid_bdev1", 00:09:43.017 "core_mask": "0x1", 00:09:43.017 "workload": "randrw", 00:09:43.017 "percentage": 50, 00:09:43.017 "status": "finished", 00:09:43.017 "queue_depth": 1, 00:09:43.017 "io_size": 131072, 00:09:43.017 "runtime": 1.36194, 00:09:43.017 "iops": 12752.397315593933, 00:09:43.017 "mibps": 1594.0496644492416, 00:09:43.017 "io_failed": 1, 00:09:43.017 "io_timeout": 0, 00:09:43.017 "avg_latency_us": 108.72247433753003, 00:09:43.017 "min_latency_us": 32.419213973799124, 00:09:43.017 "max_latency_us": 1888.810480349345 00:09:43.017 } 00:09:43.017 ], 00:09:43.017 "core_count": 1 00:09:43.018 } 00:09:43.018 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67325 ']' 00:09:43.018 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67325 00:09:43.018 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:43.018 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.018 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67325 00:09:43.018 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:43.018 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:43.018 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67325' 00:09:43.018 killing process with pid 67325 00:09:43.018 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67325 00:09:43.018 17:54:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67325 00:09:43.018 [2024-11-26 17:54:24.814911] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:43.276 [2024-11-26 
17:54:25.098769] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.653 17:54:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SLvJ75h6og 00:09:44.653 17:54:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:44.653 17:54:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:44.653 17:54:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:44.653 17:54:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:44.653 ************************************ 00:09:44.653 END TEST raid_read_error_test 00:09:44.653 ************************************ 00:09:44.653 17:54:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:44.653 17:54:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:44.653 17:54:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:44.653 00:09:44.653 real 0m4.960s 00:09:44.653 user 0m5.944s 00:09:44.653 sys 0m0.575s 00:09:44.653 17:54:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.653 17:54:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.935 17:54:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:44.935 17:54:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:44.935 17:54:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.935 17:54:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.935 ************************************ 00:09:44.935 START TEST raid_write_error_test 00:09:44.935 ************************************ 00:09:44.935 17:54:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:44.935 17:54:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:44.935 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:44.935 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:44.935 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:44.936 17:54:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Xn5A1gSlVD 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67471 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67471 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67471 ']' 00:09:44.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.936 17:54:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.936 [2024-11-26 17:54:26.666644] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:09:44.936 [2024-11-26 17:54:26.666786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67471 ] 00:09:45.197 [2024-11-26 17:54:26.828169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.197 [2024-11-26 17:54:26.967330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.456 [2024-11-26 17:54:27.213636] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.456 [2024-11-26 17:54:27.213698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.024 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.025 BaseBdev1_malloc 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.025 true 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.025 [2024-11-26 17:54:27.646798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:46.025 [2024-11-26 17:54:27.647028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.025 [2024-11-26 17:54:27.647083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:46.025 [2024-11-26 17:54:27.647099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.025 [2024-11-26 17:54:27.649795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.025 [2024-11-26 17:54:27.649857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:46.025 BaseBdev1 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:46.025 BaseBdev2_malloc 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.025 true 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.025 [2024-11-26 17:54:27.721563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:46.025 [2024-11-26 17:54:27.721785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.025 [2024-11-26 17:54:27.721817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:46.025 [2024-11-26 17:54:27.721830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.025 [2024-11-26 17:54:27.724458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.025 [2024-11-26 17:54:27.724516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:46.025 BaseBdev2 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:46.025 17:54:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.025 BaseBdev3_malloc 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.025 true 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.025 [2024-11-26 17:54:27.802673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:46.025 [2024-11-26 17:54:27.802776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.025 [2024-11-26 17:54:27.802805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:46.025 [2024-11-26 17:54:27.802818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.025 [2024-11-26 17:54:27.805467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.025 [2024-11-26 17:54:27.805536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:46.025 BaseBdev3 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.025 [2024-11-26 17:54:27.814770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.025 [2024-11-26 17:54:27.817061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.025 [2024-11-26 17:54:27.817168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:46.025 [2024-11-26 17:54:27.817434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:46.025 [2024-11-26 17:54:27.817450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:46.025 [2024-11-26 17:54:27.817805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:46.025 [2024-11-26 17:54:27.818043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:46.025 [2024-11-26 17:54:27.818061] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:46.025 [2024-11-26 17:54:27.818293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.025 "name": "raid_bdev1", 00:09:46.025 "uuid": "f4042907-96fb-4716-af39-aefb30f011dc", 00:09:46.025 "strip_size_kb": 64, 00:09:46.025 "state": "online", 00:09:46.025 "raid_level": "concat", 00:09:46.025 "superblock": true, 00:09:46.025 "num_base_bdevs": 3, 00:09:46.025 "num_base_bdevs_discovered": 3, 00:09:46.025 "num_base_bdevs_operational": 3, 00:09:46.025 "base_bdevs_list": [ 00:09:46.025 { 00:09:46.025 
"name": "BaseBdev1", 00:09:46.025 "uuid": "808f4c47-7cec-57cd-962d-ac3bc2982fb9", 00:09:46.025 "is_configured": true, 00:09:46.025 "data_offset": 2048, 00:09:46.025 "data_size": 63488 00:09:46.025 }, 00:09:46.025 { 00:09:46.025 "name": "BaseBdev2", 00:09:46.025 "uuid": "9a9f9a79-bfa3-5f6a-afcc-e9020455ad6d", 00:09:46.025 "is_configured": true, 00:09:46.025 "data_offset": 2048, 00:09:46.025 "data_size": 63488 00:09:46.025 }, 00:09:46.025 { 00:09:46.025 "name": "BaseBdev3", 00:09:46.025 "uuid": "13d787f4-a0a2-5edf-be97-9a1cbfb9cfbe", 00:09:46.025 "is_configured": true, 00:09:46.025 "data_offset": 2048, 00:09:46.025 "data_size": 63488 00:09:46.025 } 00:09:46.025 ] 00:09:46.025 }' 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.025 17:54:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.595 17:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:46.595 17:54:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:46.595 [2024-11-26 17:54:28.419265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.535 "name": "raid_bdev1", 00:09:47.535 "uuid": "f4042907-96fb-4716-af39-aefb30f011dc", 00:09:47.535 "strip_size_kb": 64, 00:09:47.535 "state": "online", 
00:09:47.535 "raid_level": "concat", 00:09:47.535 "superblock": true, 00:09:47.535 "num_base_bdevs": 3, 00:09:47.535 "num_base_bdevs_discovered": 3, 00:09:47.535 "num_base_bdevs_operational": 3, 00:09:47.535 "base_bdevs_list": [ 00:09:47.535 { 00:09:47.535 "name": "BaseBdev1", 00:09:47.535 "uuid": "808f4c47-7cec-57cd-962d-ac3bc2982fb9", 00:09:47.535 "is_configured": true, 00:09:47.535 "data_offset": 2048, 00:09:47.535 "data_size": 63488 00:09:47.535 }, 00:09:47.535 { 00:09:47.535 "name": "BaseBdev2", 00:09:47.535 "uuid": "9a9f9a79-bfa3-5f6a-afcc-e9020455ad6d", 00:09:47.535 "is_configured": true, 00:09:47.535 "data_offset": 2048, 00:09:47.535 "data_size": 63488 00:09:47.535 }, 00:09:47.535 { 00:09:47.535 "name": "BaseBdev3", 00:09:47.535 "uuid": "13d787f4-a0a2-5edf-be97-9a1cbfb9cfbe", 00:09:47.535 "is_configured": true, 00:09:47.535 "data_offset": 2048, 00:09:47.535 "data_size": 63488 00:09:47.535 } 00:09:47.535 ] 00:09:47.535 }' 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.535 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.105 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:48.105 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.105 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.105 [2024-11-26 17:54:29.817601] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.105 [2024-11-26 17:54:29.817752] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.105 [2024-11-26 17:54:29.821368] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.105 [2024-11-26 17:54:29.821539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.105 [2024-11-26 17:54:29.821624] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.105 [2024-11-26 17:54:29.821681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:48.105 { 00:09:48.105 "results": [ 00:09:48.105 { 00:09:48.105 "job": "raid_bdev1", 00:09:48.105 "core_mask": "0x1", 00:09:48.105 "workload": "randrw", 00:09:48.105 "percentage": 50, 00:09:48.105 "status": "finished", 00:09:48.105 "queue_depth": 1, 00:09:48.105 "io_size": 131072, 00:09:48.105 "runtime": 1.398962, 00:09:48.105 "iops": 12535.00809886187, 00:09:48.105 "mibps": 1566.8760123577338, 00:09:48.105 "io_failed": 1, 00:09:48.105 "io_timeout": 0, 00:09:48.105 "avg_latency_us": 110.64368888934264, 00:09:48.105 "min_latency_us": 33.760698689956335, 00:09:48.105 "max_latency_us": 1781.4917030567685 00:09:48.105 } 00:09:48.105 ], 00:09:48.105 "core_count": 1 00:09:48.105 } 00:09:48.105 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.105 17:54:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67471 00:09:48.105 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67471 ']' 00:09:48.105 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67471 00:09:48.105 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:48.105 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.105 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67471 00:09:48.105 killing process with pid 67471 00:09:48.105 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.105 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.105 
17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67471' 00:09:48.105 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67471 00:09:48.105 [2024-11-26 17:54:29.860086] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:48.105 17:54:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67471 00:09:48.365 [2024-11-26 17:54:30.144447] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.773 17:54:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:49.773 17:54:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Xn5A1gSlVD 00:09:49.773 17:54:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:49.773 17:54:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:49.773 17:54:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:49.773 ************************************ 00:09:49.773 END TEST raid_write_error_test 00:09:49.773 ************************************ 00:09:49.773 17:54:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:49.773 17:54:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:49.773 17:54:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:49.773 00:09:49.773 real 0m5.054s 00:09:49.773 user 0m6.072s 00:09:49.773 sys 0m0.575s 00:09:49.773 17:54:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.773 17:54:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.032 17:54:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:50.032 17:54:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
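The `fail_per_s=0.71` check that closes the write-error test above is a small grep/awk pipeline over the bdevperf summary file (`/raidtest/tmp.Xn5A1gSlVD` in this run): keep the lines mentioning `raid_bdev1`, drop the `Job` header line, and take the column holding the failure rate. A minimal stand-in sketch of that pipeline; the sample summary line and its column layout are illustrative, not the exact bdevperf output format:

```shell
# Stand-in bdevperf summary (illustrative layout; the harness parses the real
# file the same way in bdev_raid.sh@845).
results='Job: raid_bdev1 (Core Mask 0x1)
raid_bdev1 12535.01 1566.88 110.64 33.76 0.71 1781.49'

# Keep the data row for raid_bdev1, skip the Job header, take the fail/s column.
fail_per_s=$(printf '%s\n' "$results" | grep raid_bdev1 | grep -v Job | awk '{print $6}')

# The test then asserts the injected write error was actually counted,
# i.e. the rate is not 0.00.
if [ "$fail_per_s" != "0.00" ]; then
  echo "write errors observed: $fail_per_s fail/s"
fi
```

The double grep mirrors the harness: matching on the bdev name alone would also catch the `Job:` banner, hence the `grep -v Job` before awk.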
raid_state_function_test raid1 3 false 00:09:50.032 17:54:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:50.032 17:54:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.032 17:54:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:50.032 ************************************ 00:09:50.032 START TEST raid_state_function_test 00:09:50.032 ************************************ 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:50.032 Process raid pid: 67620 00:09:50.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67620 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67620' 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67620 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67620 ']' 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.032 
17:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.032 17:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:50.033 17:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.033 17:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.033 [2024-11-26 17:54:31.785467] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:09:50.033 [2024-11-26 17:54:31.785744] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.291 [2024-11-26 17:54:31.954886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.291 [2024-11-26 17:54:32.105692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.549 [2024-11-26 17:54:32.369511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.549 [2024-11-26 17:54:32.369707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.116 17:54:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.117 [2024-11-26 17:54:32.790624] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:51.117 [2024-11-26 17:54:32.790784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:51.117 [2024-11-26 17:54:32.790832] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:51.117 [2024-11-26 17:54:32.790879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:51.117 [2024-11-26 17:54:32.790913] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:51.117 [2024-11-26 17:54:32.790953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.117 
17:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.117 "name": "Existed_Raid", 00:09:51.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.117 "strip_size_kb": 0, 00:09:51.117 "state": "configuring", 00:09:51.117 "raid_level": "raid1", 00:09:51.117 "superblock": false, 00:09:51.117 "num_base_bdevs": 3, 00:09:51.117 "num_base_bdevs_discovered": 0, 00:09:51.117 "num_base_bdevs_operational": 3, 00:09:51.117 "base_bdevs_list": [ 00:09:51.117 { 00:09:51.117 "name": "BaseBdev1", 00:09:51.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.117 "is_configured": false, 00:09:51.117 "data_offset": 0, 00:09:51.117 "data_size": 0 00:09:51.117 }, 00:09:51.117 { 00:09:51.117 "name": "BaseBdev2", 00:09:51.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.117 "is_configured": false, 00:09:51.117 "data_offset": 0, 00:09:51.117 "data_size": 0 00:09:51.117 }, 00:09:51.117 { 00:09:51.117 "name": "BaseBdev3", 00:09:51.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.117 "is_configured": false, 00:09:51.117 "data_offset": 0, 00:09:51.117 "data_size": 0 00:09:51.117 } 00:09:51.117 ] 00:09:51.117 }' 00:09:51.117 17:54:32 
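The `verify_raid_bdev_state` calls traced above capture `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'` into `raid_bdev_info` and then compare individual fields against the expected state, level, strip size, and bdev counts. A dependency-free sketch of that field comparison, using sed in place of jq and a pared-down copy of the JSON shown above:

```shell
# Pared-down copy of the raid_bdev_info blob captured above.
raid_bdev_info='{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 0
}'

# Pull one scalar field out of the blob. The harness does this with jq;
# sed keeps the sketch self-contained. Quotes around the value are optional
# so both string and numeric fields work.
get_field() {
  printf '%s\n' "$raid_bdev_info" |
    sed -n "s/^ *\"$1\": \"\{0,1\}\([^\",]*\)\"\{0,1\},\{0,1\}\$/\1/p" |
    head -n 1
}

[ "$(get_field state)" = "configuring" ] || echo "unexpected state"
[ "$(get_field raid_level)" = "raid1" ] || echo "unexpected raid level"
[ "$(get_field num_base_bdevs_discovered)" = "0" ] || echo "unexpected discovered count"
echo "state check passed"
```

Requiring `": "` right after the key name keeps `num_base_bdevs` from also matching the `num_base_bdevs_discovered` line.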
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.117 17:54:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.375 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:51.375 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.375 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.375 [2024-11-26 17:54:33.209879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:51.375 [2024-11-26 17:54:33.210032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:51.375 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.375 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:51.375 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.375 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.376 [2024-11-26 17:54:33.221872] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:51.376 [2024-11-26 17:54:33.222064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:51.376 [2024-11-26 17:54:33.222085] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:51.376 [2024-11-26 17:54:33.222098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:51.376 [2024-11-26 17:54:33.222107] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:51.376 [2024-11-26 17:54:33.222118] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:51.376 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.376 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:51.376 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.376 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.635 [2024-11-26 17:54:33.279047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.635 BaseBdev1 00:09:51.635 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.635 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:51.635 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:51.635 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.635 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:51.635 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.635 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.635 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.635 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.635 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.635 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.635 17:54:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:51.635 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.635 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.635 [ 00:09:51.635 { 00:09:51.635 "name": "BaseBdev1", 00:09:51.635 "aliases": [ 00:09:51.635 "859845ab-675c-4380-a2d1-64fc54ecf00f" 00:09:51.636 ], 00:09:51.636 "product_name": "Malloc disk", 00:09:51.636 "block_size": 512, 00:09:51.636 "num_blocks": 65536, 00:09:51.636 "uuid": "859845ab-675c-4380-a2d1-64fc54ecf00f", 00:09:51.636 "assigned_rate_limits": { 00:09:51.636 "rw_ios_per_sec": 0, 00:09:51.636 "rw_mbytes_per_sec": 0, 00:09:51.636 "r_mbytes_per_sec": 0, 00:09:51.636 "w_mbytes_per_sec": 0 00:09:51.636 }, 00:09:51.636 "claimed": true, 00:09:51.636 "claim_type": "exclusive_write", 00:09:51.636 "zoned": false, 00:09:51.636 "supported_io_types": { 00:09:51.636 "read": true, 00:09:51.636 "write": true, 00:09:51.636 "unmap": true, 00:09:51.636 "flush": true, 00:09:51.636 "reset": true, 00:09:51.636 "nvme_admin": false, 00:09:51.636 "nvme_io": false, 00:09:51.636 "nvme_io_md": false, 00:09:51.636 "write_zeroes": true, 00:09:51.636 "zcopy": true, 00:09:51.636 "get_zone_info": false, 00:09:51.636 "zone_management": false, 00:09:51.636 "zone_append": false, 00:09:51.636 "compare": false, 00:09:51.636 "compare_and_write": false, 00:09:51.636 "abort": true, 00:09:51.636 "seek_hole": false, 00:09:51.636 "seek_data": false, 00:09:51.636 "copy": true, 00:09:51.636 "nvme_iov_md": false 00:09:51.636 }, 00:09:51.636 "memory_domains": [ 00:09:51.636 { 00:09:51.636 "dma_device_id": "system", 00:09:51.636 "dma_device_type": 1 00:09:51.636 }, 00:09:51.636 { 00:09:51.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.636 "dma_device_type": 2 00:09:51.636 } 00:09:51.636 ], 00:09:51.636 "driver_specific": {} 00:09:51.636 } 00:09:51.636 ] 00:09:51.636 17:54:33 
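The `waitforbdev BaseBdev1` sequence above (autotest_common.sh@903-911) is essentially a bounded poll: retry `rpc_cmd bdev_get_bdevs -b <name> -t <timeout>` until the bdev shows up, then return 0. A generic sketch of that retry shape, with the probe command parameterized so it runs without an SPDK target; `test -e` on a marker file stands in for the RPC call:

```shell
# Bounded retry loop: run a probe until it succeeds or attempts run out.
# In the harness the probe is `rpc_cmd bdev_get_bdevs -b "$bdev" -t 2000`.
wait_for() {
  probe=$1
  max_tries=${2:-50}
  i=0
  while [ "$i" -lt "$max_tries" ]; do
    if eval "$probe" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 0.1
  done
  return 1
}

marker=$(mktemp -u)
( sleep 0.2; touch "$marker" ) &   # the "bdev" appears asynchronously
if wait_for "test -e $marker" 50; then
  echo "bdev ready"
fi
wait
rm -f "$marker"
```

The bound matters: without `max_tries` a bdev that never registers (e.g. a failed `bdev_malloc_create`) would hang the whole autotest stage instead of failing it.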
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:51.636 "name": "Existed_Raid", 00:09:51.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.636 "strip_size_kb": 0, 00:09:51.636 "state": "configuring", 00:09:51.636 "raid_level": "raid1", 00:09:51.636 "superblock": false, 00:09:51.636 "num_base_bdevs": 3, 00:09:51.636 "num_base_bdevs_discovered": 1, 00:09:51.636 "num_base_bdevs_operational": 3, 00:09:51.636 "base_bdevs_list": [ 00:09:51.636 { 00:09:51.636 "name": "BaseBdev1", 00:09:51.636 "uuid": "859845ab-675c-4380-a2d1-64fc54ecf00f", 00:09:51.636 "is_configured": true, 00:09:51.636 "data_offset": 0, 00:09:51.636 "data_size": 65536 00:09:51.636 }, 00:09:51.636 { 00:09:51.636 "name": "BaseBdev2", 00:09:51.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.636 "is_configured": false, 00:09:51.636 "data_offset": 0, 00:09:51.636 "data_size": 0 00:09:51.636 }, 00:09:51.636 { 00:09:51.636 "name": "BaseBdev3", 00:09:51.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.636 "is_configured": false, 00:09:51.636 "data_offset": 0, 00:09:51.636 "data_size": 0 00:09:51.636 } 00:09:51.636 ] 00:09:51.636 }' 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.636 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.205 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:52.205 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.205 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.205 [2024-11-26 17:54:33.790341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:52.205 [2024-11-26 17:54:33.790553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:52.205 17:54:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.205 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:52.205 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.205 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.206 [2024-11-26 17:54:33.798426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.206 [2024-11-26 17:54:33.801087] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:52.206 [2024-11-26 17:54:33.801262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:52.206 [2024-11-26 17:54:33.801311] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:52.206 [2024-11-26 17:54:33.801358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.206 "name": "Existed_Raid", 00:09:52.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.206 "strip_size_kb": 0, 00:09:52.206 "state": "configuring", 00:09:52.206 "raid_level": "raid1", 00:09:52.206 "superblock": false, 00:09:52.206 "num_base_bdevs": 3, 00:09:52.206 "num_base_bdevs_discovered": 1, 00:09:52.206 "num_base_bdevs_operational": 3, 00:09:52.206 "base_bdevs_list": [ 00:09:52.206 { 00:09:52.206 "name": "BaseBdev1", 00:09:52.206 "uuid": "859845ab-675c-4380-a2d1-64fc54ecf00f", 00:09:52.206 "is_configured": true, 00:09:52.206 "data_offset": 0, 00:09:52.206 "data_size": 65536 00:09:52.206 }, 00:09:52.206 { 00:09:52.206 "name": "BaseBdev2", 00:09:52.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.206 
"is_configured": false, 00:09:52.206 "data_offset": 0, 00:09:52.206 "data_size": 0 00:09:52.206 }, 00:09:52.206 { 00:09:52.206 "name": "BaseBdev3", 00:09:52.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.206 "is_configured": false, 00:09:52.206 "data_offset": 0, 00:09:52.206 "data_size": 0 00:09:52.206 } 00:09:52.206 ] 00:09:52.206 }' 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.206 17:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.466 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:52.466 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.466 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.466 [2024-11-26 17:54:34.313777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.466 BaseBdev2 00:09:52.466 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.466 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:52.466 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:52.466 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.466 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:52.466 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.466 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.466 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:52.466 17:54:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.466 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.466 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.466 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:52.466 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.466 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.725 [ 00:09:52.725 { 00:09:52.725 "name": "BaseBdev2", 00:09:52.725 "aliases": [ 00:09:52.725 "893ed3f2-3dcd-41a5-98e1-fa22a69ed7b2" 00:09:52.725 ], 00:09:52.725 "product_name": "Malloc disk", 00:09:52.725 "block_size": 512, 00:09:52.725 "num_blocks": 65536, 00:09:52.725 "uuid": "893ed3f2-3dcd-41a5-98e1-fa22a69ed7b2", 00:09:52.725 "assigned_rate_limits": { 00:09:52.725 "rw_ios_per_sec": 0, 00:09:52.725 "rw_mbytes_per_sec": 0, 00:09:52.725 "r_mbytes_per_sec": 0, 00:09:52.725 "w_mbytes_per_sec": 0 00:09:52.725 }, 00:09:52.725 "claimed": true, 00:09:52.725 "claim_type": "exclusive_write", 00:09:52.725 "zoned": false, 00:09:52.725 "supported_io_types": { 00:09:52.725 "read": true, 00:09:52.725 "write": true, 00:09:52.725 "unmap": true, 00:09:52.725 "flush": true, 00:09:52.725 "reset": true, 00:09:52.725 "nvme_admin": false, 00:09:52.725 "nvme_io": false, 00:09:52.725 "nvme_io_md": false, 00:09:52.725 "write_zeroes": true, 00:09:52.725 "zcopy": true, 00:09:52.725 "get_zone_info": false, 00:09:52.725 "zone_management": false, 00:09:52.725 "zone_append": false, 00:09:52.725 "compare": false, 00:09:52.725 "compare_and_write": false, 00:09:52.725 "abort": true, 00:09:52.725 "seek_hole": false, 00:09:52.725 "seek_data": false, 00:09:52.725 "copy": true, 00:09:52.725 "nvme_iov_md": false 00:09:52.725 }, 00:09:52.725 
"memory_domains": [ 00:09:52.725 { 00:09:52.725 "dma_device_id": "system", 00:09:52.725 "dma_device_type": 1 00:09:52.725 }, 00:09:52.725 { 00:09:52.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.725 "dma_device_type": 2 00:09:52.725 } 00:09:52.725 ], 00:09:52.725 "driver_specific": {} 00:09:52.725 } 00:09:52.725 ] 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.725 "name": "Existed_Raid", 00:09:52.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.725 "strip_size_kb": 0, 00:09:52.725 "state": "configuring", 00:09:52.725 "raid_level": "raid1", 00:09:52.725 "superblock": false, 00:09:52.725 "num_base_bdevs": 3, 00:09:52.725 "num_base_bdevs_discovered": 2, 00:09:52.725 "num_base_bdevs_operational": 3, 00:09:52.725 "base_bdevs_list": [ 00:09:52.725 { 00:09:52.725 "name": "BaseBdev1", 00:09:52.725 "uuid": "859845ab-675c-4380-a2d1-64fc54ecf00f", 00:09:52.725 "is_configured": true, 00:09:52.725 "data_offset": 0, 00:09:52.725 "data_size": 65536 00:09:52.725 }, 00:09:52.725 { 00:09:52.725 "name": "BaseBdev2", 00:09:52.725 "uuid": "893ed3f2-3dcd-41a5-98e1-fa22a69ed7b2", 00:09:52.725 "is_configured": true, 00:09:52.725 "data_offset": 0, 00:09:52.725 "data_size": 65536 00:09:52.725 }, 00:09:52.725 { 00:09:52.725 "name": "BaseBdev3", 00:09:52.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.725 "is_configured": false, 00:09:52.725 "data_offset": 0, 00:09:52.725 "data_size": 0 00:09:52.725 } 00:09:52.725 ] 00:09:52.725 }' 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.725 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.983 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:52.983 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.983 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.310 [2024-11-26 17:54:34.853118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:53.310 [2024-11-26 17:54:34.853298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:53.310 [2024-11-26 17:54:34.853338] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:53.310 [2024-11-26 17:54:34.853736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:53.310 [2024-11-26 17:54:34.854066] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:53.310 [2024-11-26 17:54:34.854136] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:53.310 [2024-11-26 17:54:34.854555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.310 BaseBdev3 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.310 [ 00:09:53.310 { 00:09:53.310 "name": "BaseBdev3", 00:09:53.310 "aliases": [ 00:09:53.310 "af8ad391-0dfd-4ea9-940a-52ac32d34431" 00:09:53.310 ], 00:09:53.310 "product_name": "Malloc disk", 00:09:53.310 "block_size": 512, 00:09:53.310 "num_blocks": 65536, 00:09:53.310 "uuid": "af8ad391-0dfd-4ea9-940a-52ac32d34431", 00:09:53.310 "assigned_rate_limits": { 00:09:53.310 "rw_ios_per_sec": 0, 00:09:53.310 "rw_mbytes_per_sec": 0, 00:09:53.310 "r_mbytes_per_sec": 0, 00:09:53.310 "w_mbytes_per_sec": 0 00:09:53.310 }, 00:09:53.310 "claimed": true, 00:09:53.310 "claim_type": "exclusive_write", 00:09:53.310 "zoned": false, 00:09:53.310 "supported_io_types": { 00:09:53.310 "read": true, 00:09:53.310 "write": true, 00:09:53.310 "unmap": true, 00:09:53.310 "flush": true, 00:09:53.310 "reset": true, 00:09:53.310 "nvme_admin": false, 00:09:53.310 "nvme_io": false, 00:09:53.310 "nvme_io_md": false, 00:09:53.310 "write_zeroes": true, 00:09:53.310 "zcopy": true, 00:09:53.310 "get_zone_info": false, 00:09:53.310 "zone_management": false, 00:09:53.310 "zone_append": false, 00:09:53.310 "compare": false, 00:09:53.310 "compare_and_write": false, 00:09:53.310 "abort": true, 00:09:53.310 "seek_hole": false, 00:09:53.310 "seek_data": false, 00:09:53.310 
"copy": true, 00:09:53.310 "nvme_iov_md": false 00:09:53.310 }, 00:09:53.310 "memory_domains": [ 00:09:53.310 { 00:09:53.310 "dma_device_id": "system", 00:09:53.310 "dma_device_type": 1 00:09:53.310 }, 00:09:53.310 { 00:09:53.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.310 "dma_device_type": 2 00:09:53.310 } 00:09:53.310 ], 00:09:53.310 "driver_specific": {} 00:09:53.310 } 00:09:53.310 ] 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.310 17:54:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.310 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.310 "name": "Existed_Raid", 00:09:53.310 "uuid": "fab78b57-0dde-40f8-a2b2-59bdb9661173", 00:09:53.310 "strip_size_kb": 0, 00:09:53.310 "state": "online", 00:09:53.310 "raid_level": "raid1", 00:09:53.310 "superblock": false, 00:09:53.310 "num_base_bdevs": 3, 00:09:53.310 "num_base_bdevs_discovered": 3, 00:09:53.310 "num_base_bdevs_operational": 3, 00:09:53.310 "base_bdevs_list": [ 00:09:53.310 { 00:09:53.310 "name": "BaseBdev1", 00:09:53.310 "uuid": "859845ab-675c-4380-a2d1-64fc54ecf00f", 00:09:53.310 "is_configured": true, 00:09:53.310 "data_offset": 0, 00:09:53.310 "data_size": 65536 00:09:53.310 }, 00:09:53.310 { 00:09:53.310 "name": "BaseBdev2", 00:09:53.310 "uuid": "893ed3f2-3dcd-41a5-98e1-fa22a69ed7b2", 00:09:53.311 "is_configured": true, 00:09:53.311 "data_offset": 0, 00:09:53.311 "data_size": 65536 00:09:53.311 }, 00:09:53.311 { 00:09:53.311 "name": "BaseBdev3", 00:09:53.311 "uuid": "af8ad391-0dfd-4ea9-940a-52ac32d34431", 00:09:53.311 "is_configured": true, 00:09:53.311 "data_offset": 0, 00:09:53.311 "data_size": 65536 00:09:53.311 } 00:09:53.311 ] 00:09:53.311 }' 00:09:53.311 17:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.311 17:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.569 17:54:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:53.569 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:53.569 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:53.569 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:53.569 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:53.569 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:53.569 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:53.569 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.569 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.569 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:53.569 [2024-11-26 17:54:35.409569] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.569 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.827 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:53.827 "name": "Existed_Raid", 00:09:53.827 "aliases": [ 00:09:53.827 "fab78b57-0dde-40f8-a2b2-59bdb9661173" 00:09:53.827 ], 00:09:53.827 "product_name": "Raid Volume", 00:09:53.827 "block_size": 512, 00:09:53.827 "num_blocks": 65536, 00:09:53.827 "uuid": "fab78b57-0dde-40f8-a2b2-59bdb9661173", 00:09:53.827 "assigned_rate_limits": { 00:09:53.827 "rw_ios_per_sec": 0, 00:09:53.827 "rw_mbytes_per_sec": 0, 00:09:53.827 "r_mbytes_per_sec": 0, 00:09:53.827 "w_mbytes_per_sec": 0 00:09:53.827 }, 00:09:53.827 "claimed": false, 00:09:53.827 "zoned": false, 
00:09:53.827 "supported_io_types": { 00:09:53.827 "read": true, 00:09:53.827 "write": true, 00:09:53.827 "unmap": false, 00:09:53.827 "flush": false, 00:09:53.827 "reset": true, 00:09:53.827 "nvme_admin": false, 00:09:53.827 "nvme_io": false, 00:09:53.827 "nvme_io_md": false, 00:09:53.827 "write_zeroes": true, 00:09:53.827 "zcopy": false, 00:09:53.827 "get_zone_info": false, 00:09:53.827 "zone_management": false, 00:09:53.827 "zone_append": false, 00:09:53.827 "compare": false, 00:09:53.827 "compare_and_write": false, 00:09:53.827 "abort": false, 00:09:53.827 "seek_hole": false, 00:09:53.827 "seek_data": false, 00:09:53.827 "copy": false, 00:09:53.827 "nvme_iov_md": false 00:09:53.827 }, 00:09:53.827 "memory_domains": [ 00:09:53.827 { 00:09:53.827 "dma_device_id": "system", 00:09:53.827 "dma_device_type": 1 00:09:53.827 }, 00:09:53.827 { 00:09:53.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.827 "dma_device_type": 2 00:09:53.827 }, 00:09:53.827 { 00:09:53.827 "dma_device_id": "system", 00:09:53.827 "dma_device_type": 1 00:09:53.827 }, 00:09:53.827 { 00:09:53.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.827 "dma_device_type": 2 00:09:53.827 }, 00:09:53.827 { 00:09:53.827 "dma_device_id": "system", 00:09:53.827 "dma_device_type": 1 00:09:53.827 }, 00:09:53.827 { 00:09:53.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.827 "dma_device_type": 2 00:09:53.827 } 00:09:53.827 ], 00:09:53.827 "driver_specific": { 00:09:53.827 "raid": { 00:09:53.827 "uuid": "fab78b57-0dde-40f8-a2b2-59bdb9661173", 00:09:53.827 "strip_size_kb": 0, 00:09:53.827 "state": "online", 00:09:53.827 "raid_level": "raid1", 00:09:53.827 "superblock": false, 00:09:53.827 "num_base_bdevs": 3, 00:09:53.827 "num_base_bdevs_discovered": 3, 00:09:53.827 "num_base_bdevs_operational": 3, 00:09:53.827 "base_bdevs_list": [ 00:09:53.827 { 00:09:53.827 "name": "BaseBdev1", 00:09:53.827 "uuid": "859845ab-675c-4380-a2d1-64fc54ecf00f", 00:09:53.827 "is_configured": true, 00:09:53.827 
"data_offset": 0, 00:09:53.827 "data_size": 65536 00:09:53.827 }, 00:09:53.827 { 00:09:53.828 "name": "BaseBdev2", 00:09:53.828 "uuid": "893ed3f2-3dcd-41a5-98e1-fa22a69ed7b2", 00:09:53.828 "is_configured": true, 00:09:53.828 "data_offset": 0, 00:09:53.828 "data_size": 65536 00:09:53.828 }, 00:09:53.828 { 00:09:53.828 "name": "BaseBdev3", 00:09:53.828 "uuid": "af8ad391-0dfd-4ea9-940a-52ac32d34431", 00:09:53.828 "is_configured": true, 00:09:53.828 "data_offset": 0, 00:09:53.828 "data_size": 65536 00:09:53.828 } 00:09:53.828 ] 00:09:53.828 } 00:09:53.828 } 00:09:53.828 }' 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:53.828 BaseBdev2 00:09:53.828 BaseBdev3' 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.828 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.828 [2024-11-26 17:54:35.685150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:54.086 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.086 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:54.086 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:54.086 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:54.086 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:54.086 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:54.086 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:54.086 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.086 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.086 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.086 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.086 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:54.087 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.087 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:54.087 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.087 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.087 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.087 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.087 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.087 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.087 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.087 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.087 "name": "Existed_Raid", 00:09:54.087 "uuid": "fab78b57-0dde-40f8-a2b2-59bdb9661173", 00:09:54.087 "strip_size_kb": 0, 00:09:54.087 "state": "online", 00:09:54.087 "raid_level": "raid1", 00:09:54.087 "superblock": false, 00:09:54.087 "num_base_bdevs": 3, 00:09:54.087 "num_base_bdevs_discovered": 2, 00:09:54.087 "num_base_bdevs_operational": 2, 00:09:54.087 "base_bdevs_list": [ 00:09:54.087 { 00:09:54.087 "name": null, 00:09:54.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.087 "is_configured": false, 00:09:54.087 "data_offset": 0, 00:09:54.087 "data_size": 65536 00:09:54.087 }, 00:09:54.087 { 00:09:54.087 "name": "BaseBdev2", 00:09:54.087 "uuid": "893ed3f2-3dcd-41a5-98e1-fa22a69ed7b2", 00:09:54.087 "is_configured": true, 00:09:54.087 "data_offset": 0, 00:09:54.087 "data_size": 65536 00:09:54.087 }, 00:09:54.087 { 00:09:54.087 "name": "BaseBdev3", 00:09:54.087 "uuid": "af8ad391-0dfd-4ea9-940a-52ac32d34431", 00:09:54.087 "is_configured": true, 00:09:54.087 "data_offset": 0, 00:09:54.087 "data_size": 65536 00:09:54.087 } 00:09:54.087 ] 
00:09:54.087 }' 00:09:54.087 17:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.087 17:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.654 [2024-11-26 17:54:36.289667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:54.654 17:54:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.654 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.654 [2024-11-26 17:54:36.468632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:54.654 [2024-11-26 17:54:36.468771] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.915 [2024-11-26 17:54:36.585851] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.915 [2024-11-26 17:54:36.585927] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.915 [2024-11-26 17:54:36.585942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:54.915 17:54:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.915 BaseBdev2 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.915 
17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.915 [ 00:09:54.915 { 00:09:54.915 "name": "BaseBdev2", 00:09:54.915 "aliases": [ 00:09:54.915 "a7ad386e-24e1-4f52-8646-b59c4fc308a2" 00:09:54.915 ], 00:09:54.915 "product_name": "Malloc disk", 00:09:54.915 "block_size": 512, 00:09:54.915 "num_blocks": 65536, 00:09:54.915 "uuid": "a7ad386e-24e1-4f52-8646-b59c4fc308a2", 00:09:54.915 "assigned_rate_limits": { 00:09:54.915 "rw_ios_per_sec": 0, 00:09:54.915 "rw_mbytes_per_sec": 0, 00:09:54.915 "r_mbytes_per_sec": 0, 00:09:54.915 "w_mbytes_per_sec": 0 00:09:54.915 }, 00:09:54.915 "claimed": false, 00:09:54.915 "zoned": false, 00:09:54.915 "supported_io_types": { 00:09:54.915 "read": true, 00:09:54.915 "write": true, 00:09:54.915 "unmap": true, 00:09:54.915 "flush": true, 00:09:54.915 "reset": true, 00:09:54.915 "nvme_admin": false, 00:09:54.915 "nvme_io": false, 00:09:54.915 "nvme_io_md": false, 00:09:54.915 "write_zeroes": true, 
00:09:54.915 "zcopy": true, 00:09:54.915 "get_zone_info": false, 00:09:54.915 "zone_management": false, 00:09:54.915 "zone_append": false, 00:09:54.915 "compare": false, 00:09:54.915 "compare_and_write": false, 00:09:54.915 "abort": true, 00:09:54.915 "seek_hole": false, 00:09:54.915 "seek_data": false, 00:09:54.915 "copy": true, 00:09:54.915 "nvme_iov_md": false 00:09:54.915 }, 00:09:54.915 "memory_domains": [ 00:09:54.915 { 00:09:54.915 "dma_device_id": "system", 00:09:54.915 "dma_device_type": 1 00:09:54.915 }, 00:09:54.915 { 00:09:54.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.915 "dma_device_type": 2 00:09:54.915 } 00:09:54.915 ], 00:09:54.915 "driver_specific": {} 00:09:54.915 } 00:09:54.915 ] 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.915 BaseBdev3 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.915 17:54:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.915 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:54.916 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.916 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.916 [ 00:09:54.916 { 00:09:54.916 "name": "BaseBdev3", 00:09:54.916 "aliases": [ 00:09:54.916 "c69a4e7c-cce9-4fb1-b4d0-a3b4cf772e2a" 00:09:54.916 ], 00:09:54.916 "product_name": "Malloc disk", 00:09:54.916 "block_size": 512, 00:09:54.916 "num_blocks": 65536, 00:09:54.916 "uuid": "c69a4e7c-cce9-4fb1-b4d0-a3b4cf772e2a", 00:09:54.916 "assigned_rate_limits": { 00:09:54.916 "rw_ios_per_sec": 0, 00:09:54.916 "rw_mbytes_per_sec": 0, 00:09:54.916 "r_mbytes_per_sec": 0, 00:09:54.916 "w_mbytes_per_sec": 0 00:09:54.916 }, 00:09:54.916 "claimed": false, 00:09:54.916 "zoned": false, 00:09:54.916 "supported_io_types": { 00:09:54.916 "read": true, 00:09:54.916 "write": true, 00:09:54.916 "unmap": true, 00:09:54.916 "flush": true, 00:09:54.916 "reset": true, 00:09:54.916 "nvme_admin": false, 00:09:54.916 "nvme_io": false, 00:09:54.916 "nvme_io_md": false, 00:09:54.916 "write_zeroes": true, 
00:09:54.916 "zcopy": true, 00:09:54.916 "get_zone_info": false, 00:09:54.916 "zone_management": false, 00:09:54.916 "zone_append": false, 00:09:54.916 "compare": false, 00:09:54.916 "compare_and_write": false, 00:09:54.916 "abort": true, 00:09:54.916 "seek_hole": false, 00:09:54.916 "seek_data": false, 00:09:54.916 "copy": true, 00:09:54.916 "nvme_iov_md": false 00:09:54.916 }, 00:09:54.916 "memory_domains": [ 00:09:54.916 { 00:09:54.916 "dma_device_id": "system", 00:09:54.916 "dma_device_type": 1 00:09:54.916 }, 00:09:54.916 { 00:09:54.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.916 "dma_device_type": 2 00:09:54.916 } 00:09:54.916 ], 00:09:54.916 "driver_specific": {} 00:09:54.916 } 00:09:54.916 ] 00:09:54.916 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.916 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:54.916 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:54.916 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:54.916 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:54.916 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.916 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.916 [2024-11-26 17:54:36.774290] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.916 [2024-11-26 17:54:36.774474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.916 [2024-11-26 17:54:36.774545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.176 [2024-11-26 17:54:36.777048] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:55.176 "name": "Existed_Raid", 00:09:55.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.176 "strip_size_kb": 0, 00:09:55.176 "state": "configuring", 00:09:55.176 "raid_level": "raid1", 00:09:55.176 "superblock": false, 00:09:55.176 "num_base_bdevs": 3, 00:09:55.176 "num_base_bdevs_discovered": 2, 00:09:55.176 "num_base_bdevs_operational": 3, 00:09:55.176 "base_bdevs_list": [ 00:09:55.176 { 00:09:55.176 "name": "BaseBdev1", 00:09:55.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.176 "is_configured": false, 00:09:55.176 "data_offset": 0, 00:09:55.176 "data_size": 0 00:09:55.176 }, 00:09:55.176 { 00:09:55.176 "name": "BaseBdev2", 00:09:55.176 "uuid": "a7ad386e-24e1-4f52-8646-b59c4fc308a2", 00:09:55.176 "is_configured": true, 00:09:55.176 "data_offset": 0, 00:09:55.176 "data_size": 65536 00:09:55.176 }, 00:09:55.176 { 00:09:55.176 "name": "BaseBdev3", 00:09:55.176 "uuid": "c69a4e7c-cce9-4fb1-b4d0-a3b4cf772e2a", 00:09:55.176 "is_configured": true, 00:09:55.176 "data_offset": 0, 00:09:55.176 "data_size": 65536 00:09:55.176 } 00:09:55.176 ] 00:09:55.176 }' 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.176 17:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.435 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:55.435 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.435 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.435 [2024-11-26 17:54:37.177619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:55.435 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.435 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:55.435 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.435 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.435 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.435 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.435 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.435 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.435 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.435 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.435 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.435 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.435 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.435 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.435 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.436 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.436 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.436 "name": "Existed_Raid", 00:09:55.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.436 "strip_size_kb": 0, 00:09:55.436 "state": "configuring", 00:09:55.436 "raid_level": "raid1", 00:09:55.436 "superblock": false, 00:09:55.436 "num_base_bdevs": 3, 
00:09:55.436 "num_base_bdevs_discovered": 1, 00:09:55.436 "num_base_bdevs_operational": 3, 00:09:55.436 "base_bdevs_list": [ 00:09:55.436 { 00:09:55.436 "name": "BaseBdev1", 00:09:55.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.436 "is_configured": false, 00:09:55.436 "data_offset": 0, 00:09:55.436 "data_size": 0 00:09:55.436 }, 00:09:55.436 { 00:09:55.436 "name": null, 00:09:55.436 "uuid": "a7ad386e-24e1-4f52-8646-b59c4fc308a2", 00:09:55.436 "is_configured": false, 00:09:55.436 "data_offset": 0, 00:09:55.436 "data_size": 65536 00:09:55.436 }, 00:09:55.436 { 00:09:55.436 "name": "BaseBdev3", 00:09:55.436 "uuid": "c69a4e7c-cce9-4fb1-b4d0-a3b4cf772e2a", 00:09:55.436 "is_configured": true, 00:09:55.436 "data_offset": 0, 00:09:55.436 "data_size": 65536 00:09:55.436 } 00:09:55.436 ] 00:09:55.436 }' 00:09:55.436 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.436 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.005 17:54:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.005 [2024-11-26 17:54:37.723726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.005 BaseBdev1 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.005 [ 00:09:56.005 { 00:09:56.005 "name": "BaseBdev1", 00:09:56.005 "aliases": [ 00:09:56.005 "c776ef1b-91e1-4f6b-80a1-b38b508a7121" 00:09:56.005 ], 00:09:56.005 "product_name": "Malloc disk", 
00:09:56.005 "block_size": 512, 00:09:56.005 "num_blocks": 65536, 00:09:56.005 "uuid": "c776ef1b-91e1-4f6b-80a1-b38b508a7121", 00:09:56.005 "assigned_rate_limits": { 00:09:56.005 "rw_ios_per_sec": 0, 00:09:56.005 "rw_mbytes_per_sec": 0, 00:09:56.005 "r_mbytes_per_sec": 0, 00:09:56.005 "w_mbytes_per_sec": 0 00:09:56.005 }, 00:09:56.005 "claimed": true, 00:09:56.005 "claim_type": "exclusive_write", 00:09:56.005 "zoned": false, 00:09:56.005 "supported_io_types": { 00:09:56.005 "read": true, 00:09:56.005 "write": true, 00:09:56.005 "unmap": true, 00:09:56.005 "flush": true, 00:09:56.005 "reset": true, 00:09:56.005 "nvme_admin": false, 00:09:56.005 "nvme_io": false, 00:09:56.005 "nvme_io_md": false, 00:09:56.005 "write_zeroes": true, 00:09:56.005 "zcopy": true, 00:09:56.005 "get_zone_info": false, 00:09:56.005 "zone_management": false, 00:09:56.005 "zone_append": false, 00:09:56.005 "compare": false, 00:09:56.005 "compare_and_write": false, 00:09:56.005 "abort": true, 00:09:56.005 "seek_hole": false, 00:09:56.005 "seek_data": false, 00:09:56.005 "copy": true, 00:09:56.005 "nvme_iov_md": false 00:09:56.005 }, 00:09:56.005 "memory_domains": [ 00:09:56.005 { 00:09:56.005 "dma_device_id": "system", 00:09:56.005 "dma_device_type": 1 00:09:56.005 }, 00:09:56.005 { 00:09:56.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.005 "dma_device_type": 2 00:09:56.005 } 00:09:56.005 ], 00:09:56.005 "driver_specific": {} 00:09:56.005 } 00:09:56.005 ] 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.005 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.005 "name": "Existed_Raid", 00:09:56.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.005 "strip_size_kb": 0, 00:09:56.005 "state": "configuring", 00:09:56.005 "raid_level": "raid1", 00:09:56.005 "superblock": false, 00:09:56.005 "num_base_bdevs": 3, 00:09:56.005 "num_base_bdevs_discovered": 2, 00:09:56.005 "num_base_bdevs_operational": 3, 00:09:56.005 "base_bdevs_list": [ 00:09:56.005 { 00:09:56.005 "name": "BaseBdev1", 00:09:56.005 "uuid": 
"c776ef1b-91e1-4f6b-80a1-b38b508a7121", 00:09:56.005 "is_configured": true, 00:09:56.005 "data_offset": 0, 00:09:56.005 "data_size": 65536 00:09:56.005 }, 00:09:56.005 { 00:09:56.005 "name": null, 00:09:56.005 "uuid": "a7ad386e-24e1-4f52-8646-b59c4fc308a2", 00:09:56.005 "is_configured": false, 00:09:56.006 "data_offset": 0, 00:09:56.006 "data_size": 65536 00:09:56.006 }, 00:09:56.006 { 00:09:56.006 "name": "BaseBdev3", 00:09:56.006 "uuid": "c69a4e7c-cce9-4fb1-b4d0-a3b4cf772e2a", 00:09:56.006 "is_configured": true, 00:09:56.006 "data_offset": 0, 00:09:56.006 "data_size": 65536 00:09:56.006 } 00:09:56.006 ] 00:09:56.006 }' 00:09:56.006 17:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.006 17:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.575 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.575 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:56.575 17:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.575 17:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.575 17:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.575 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:56.575 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:56.575 17:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.576 [2024-11-26 17:54:38.274986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:56.576 17:54:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.576 "name": "Existed_Raid", 00:09:56.576 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:56.576 "strip_size_kb": 0, 00:09:56.576 "state": "configuring", 00:09:56.576 "raid_level": "raid1", 00:09:56.576 "superblock": false, 00:09:56.576 "num_base_bdevs": 3, 00:09:56.576 "num_base_bdevs_discovered": 1, 00:09:56.576 "num_base_bdevs_operational": 3, 00:09:56.576 "base_bdevs_list": [ 00:09:56.576 { 00:09:56.576 "name": "BaseBdev1", 00:09:56.576 "uuid": "c776ef1b-91e1-4f6b-80a1-b38b508a7121", 00:09:56.576 "is_configured": true, 00:09:56.576 "data_offset": 0, 00:09:56.576 "data_size": 65536 00:09:56.576 }, 00:09:56.576 { 00:09:56.576 "name": null, 00:09:56.576 "uuid": "a7ad386e-24e1-4f52-8646-b59c4fc308a2", 00:09:56.576 "is_configured": false, 00:09:56.576 "data_offset": 0, 00:09:56.576 "data_size": 65536 00:09:56.576 }, 00:09:56.576 { 00:09:56.576 "name": null, 00:09:56.576 "uuid": "c69a4e7c-cce9-4fb1-b4d0-a3b4cf772e2a", 00:09:56.576 "is_configured": false, 00:09:56.576 "data_offset": 0, 00:09:56.576 "data_size": 65536 00:09:56.576 } 00:09:56.576 ] 00:09:56.576 }' 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.576 17:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.143 [2024-11-26 17:54:38.754272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.143 "name": "Existed_Raid", 00:09:57.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.143 "strip_size_kb": 0, 00:09:57.143 "state": "configuring", 00:09:57.143 "raid_level": "raid1", 00:09:57.143 "superblock": false, 00:09:57.143 "num_base_bdevs": 3, 00:09:57.143 "num_base_bdevs_discovered": 2, 00:09:57.143 "num_base_bdevs_operational": 3, 00:09:57.143 "base_bdevs_list": [ 00:09:57.143 { 00:09:57.143 "name": "BaseBdev1", 00:09:57.143 "uuid": "c776ef1b-91e1-4f6b-80a1-b38b508a7121", 00:09:57.143 "is_configured": true, 00:09:57.143 "data_offset": 0, 00:09:57.143 "data_size": 65536 00:09:57.143 }, 00:09:57.143 { 00:09:57.143 "name": null, 00:09:57.143 "uuid": "a7ad386e-24e1-4f52-8646-b59c4fc308a2", 00:09:57.143 "is_configured": false, 00:09:57.143 "data_offset": 0, 00:09:57.143 "data_size": 65536 00:09:57.143 }, 00:09:57.143 { 00:09:57.143 "name": "BaseBdev3", 00:09:57.143 "uuid": "c69a4e7c-cce9-4fb1-b4d0-a3b4cf772e2a", 00:09:57.143 "is_configured": true, 00:09:57.143 "data_offset": 0, 00:09:57.143 "data_size": 65536 00:09:57.143 } 00:09:57.143 ] 00:09:57.143 }' 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.143 17:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.402 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.403 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.403 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:57.403 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:57.403 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.403 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:57.403 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:57.403 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.403 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.403 [2024-11-26 17:54:39.217517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:57.662 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.662 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:57.662 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.662 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.662 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.662 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.662 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.662 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.662 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.662 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.662 17:54:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.662 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.662 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.662 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.662 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.662 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.662 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.662 "name": "Existed_Raid", 00:09:57.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.662 "strip_size_kb": 0, 00:09:57.662 "state": "configuring", 00:09:57.662 "raid_level": "raid1", 00:09:57.662 "superblock": false, 00:09:57.662 "num_base_bdevs": 3, 00:09:57.662 "num_base_bdevs_discovered": 1, 00:09:57.662 "num_base_bdevs_operational": 3, 00:09:57.662 "base_bdevs_list": [ 00:09:57.662 { 00:09:57.662 "name": null, 00:09:57.662 "uuid": "c776ef1b-91e1-4f6b-80a1-b38b508a7121", 00:09:57.662 "is_configured": false, 00:09:57.662 "data_offset": 0, 00:09:57.662 "data_size": 65536 00:09:57.662 }, 00:09:57.662 { 00:09:57.662 "name": null, 00:09:57.662 "uuid": "a7ad386e-24e1-4f52-8646-b59c4fc308a2", 00:09:57.662 "is_configured": false, 00:09:57.662 "data_offset": 0, 00:09:57.662 "data_size": 65536 00:09:57.662 }, 00:09:57.662 { 00:09:57.662 "name": "BaseBdev3", 00:09:57.662 "uuid": "c69a4e7c-cce9-4fb1-b4d0-a3b4cf772e2a", 00:09:57.662 "is_configured": true, 00:09:57.662 "data_offset": 0, 00:09:57.662 "data_size": 65536 00:09:57.662 } 00:09:57.662 ] 00:09:57.662 }' 00:09:57.662 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.662 17:54:39 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:57.923 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.923 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:57.923 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.923 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.923 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.183 [2024-11-26 17:54:39.810254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.183 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.183 "name": "Existed_Raid", 00:09:58.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.183 "strip_size_kb": 0, 00:09:58.183 "state": "configuring", 00:09:58.183 "raid_level": "raid1", 00:09:58.183 "superblock": false, 00:09:58.183 "num_base_bdevs": 3, 00:09:58.183 "num_base_bdevs_discovered": 2, 00:09:58.183 "num_base_bdevs_operational": 3, 00:09:58.183 "base_bdevs_list": [ 00:09:58.183 { 00:09:58.183 "name": null, 00:09:58.183 "uuid": "c776ef1b-91e1-4f6b-80a1-b38b508a7121", 00:09:58.183 "is_configured": false, 00:09:58.183 "data_offset": 0, 00:09:58.183 "data_size": 65536 00:09:58.183 }, 00:09:58.183 { 00:09:58.183 "name": "BaseBdev2", 00:09:58.183 "uuid": "a7ad386e-24e1-4f52-8646-b59c4fc308a2", 00:09:58.183 "is_configured": true, 00:09:58.183 "data_offset": 0, 00:09:58.183 "data_size": 65536 00:09:58.183 }, 00:09:58.183 { 
00:09:58.183 "name": "BaseBdev3", 00:09:58.183 "uuid": "c69a4e7c-cce9-4fb1-b4d0-a3b4cf772e2a", 00:09:58.184 "is_configured": true, 00:09:58.184 "data_offset": 0, 00:09:58.184 "data_size": 65536 00:09:58.184 } 00:09:58.184 ] 00:09:58.184 }' 00:09:58.184 17:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.184 17:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.444 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.444 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:58.444 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.444 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.444 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.704 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:58.704 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.704 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:58.704 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.704 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.704 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.704 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c776ef1b-91e1-4f6b-80a1-b38b508a7121 00:09:58.704 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.704 17:54:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.704 [2024-11-26 17:54:40.419949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:58.704 [2024-11-26 17:54:40.420058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:58.704 [2024-11-26 17:54:40.420068] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:58.704 [2024-11-26 17:54:40.420337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:58.704 [2024-11-26 17:54:40.420538] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:58.704 [2024-11-26 17:54:40.420552] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:58.705 [2024-11-26 17:54:40.420887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.705 NewBaseBdev 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.705 [ 00:09:58.705 { 00:09:58.705 "name": "NewBaseBdev", 00:09:58.705 "aliases": [ 00:09:58.705 "c776ef1b-91e1-4f6b-80a1-b38b508a7121" 00:09:58.705 ], 00:09:58.705 "product_name": "Malloc disk", 00:09:58.705 "block_size": 512, 00:09:58.705 "num_blocks": 65536, 00:09:58.705 "uuid": "c776ef1b-91e1-4f6b-80a1-b38b508a7121", 00:09:58.705 "assigned_rate_limits": { 00:09:58.705 "rw_ios_per_sec": 0, 00:09:58.705 "rw_mbytes_per_sec": 0, 00:09:58.705 "r_mbytes_per_sec": 0, 00:09:58.705 "w_mbytes_per_sec": 0 00:09:58.705 }, 00:09:58.705 "claimed": true, 00:09:58.705 "claim_type": "exclusive_write", 00:09:58.705 "zoned": false, 00:09:58.705 "supported_io_types": { 00:09:58.705 "read": true, 00:09:58.705 "write": true, 00:09:58.705 "unmap": true, 00:09:58.705 "flush": true, 00:09:58.705 "reset": true, 00:09:58.705 "nvme_admin": false, 00:09:58.705 "nvme_io": false, 00:09:58.705 "nvme_io_md": false, 00:09:58.705 "write_zeroes": true, 00:09:58.705 "zcopy": true, 00:09:58.705 "get_zone_info": false, 00:09:58.705 "zone_management": false, 00:09:58.705 "zone_append": false, 00:09:58.705 "compare": false, 00:09:58.705 "compare_and_write": false, 00:09:58.705 "abort": true, 00:09:58.705 "seek_hole": false, 00:09:58.705 "seek_data": false, 00:09:58.705 "copy": true, 00:09:58.705 "nvme_iov_md": false 00:09:58.705 }, 00:09:58.705 "memory_domains": [ 00:09:58.705 { 00:09:58.705 
"dma_device_id": "system", 00:09:58.705 "dma_device_type": 1 00:09:58.705 }, 00:09:58.705 { 00:09:58.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.705 "dma_device_type": 2 00:09:58.705 } 00:09:58.705 ], 00:09:58.705 "driver_specific": {} 00:09:58.705 } 00:09:58.705 ] 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.705 "name": "Existed_Raid", 00:09:58.705 "uuid": "4cc9196e-5dcc-4b43-9874-2244a9e0cb7b", 00:09:58.705 "strip_size_kb": 0, 00:09:58.705 "state": "online", 00:09:58.705 "raid_level": "raid1", 00:09:58.705 "superblock": false, 00:09:58.705 "num_base_bdevs": 3, 00:09:58.705 "num_base_bdevs_discovered": 3, 00:09:58.705 "num_base_bdevs_operational": 3, 00:09:58.705 "base_bdevs_list": [ 00:09:58.705 { 00:09:58.705 "name": "NewBaseBdev", 00:09:58.705 "uuid": "c776ef1b-91e1-4f6b-80a1-b38b508a7121", 00:09:58.705 "is_configured": true, 00:09:58.705 "data_offset": 0, 00:09:58.705 "data_size": 65536 00:09:58.705 }, 00:09:58.705 { 00:09:58.705 "name": "BaseBdev2", 00:09:58.705 "uuid": "a7ad386e-24e1-4f52-8646-b59c4fc308a2", 00:09:58.705 "is_configured": true, 00:09:58.705 "data_offset": 0, 00:09:58.705 "data_size": 65536 00:09:58.705 }, 00:09:58.705 { 00:09:58.705 "name": "BaseBdev3", 00:09:58.705 "uuid": "c69a4e7c-cce9-4fb1-b4d0-a3b4cf772e2a", 00:09:58.705 "is_configured": true, 00:09:58.705 "data_offset": 0, 00:09:58.705 "data_size": 65536 00:09:58.705 } 00:09:58.705 ] 00:09:58.705 }' 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.705 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.275 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:59.275 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:59.275 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:59.275 17:54:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:59.275 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:59.275 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:59.275 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:59.275 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.275 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.275 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:59.275 [2024-11-26 17:54:40.911546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.275 17:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.275 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:59.275 "name": "Existed_Raid", 00:09:59.275 "aliases": [ 00:09:59.275 "4cc9196e-5dcc-4b43-9874-2244a9e0cb7b" 00:09:59.275 ], 00:09:59.275 "product_name": "Raid Volume", 00:09:59.275 "block_size": 512, 00:09:59.275 "num_blocks": 65536, 00:09:59.275 "uuid": "4cc9196e-5dcc-4b43-9874-2244a9e0cb7b", 00:09:59.275 "assigned_rate_limits": { 00:09:59.275 "rw_ios_per_sec": 0, 00:09:59.275 "rw_mbytes_per_sec": 0, 00:09:59.275 "r_mbytes_per_sec": 0, 00:09:59.275 "w_mbytes_per_sec": 0 00:09:59.275 }, 00:09:59.275 "claimed": false, 00:09:59.275 "zoned": false, 00:09:59.275 "supported_io_types": { 00:09:59.275 "read": true, 00:09:59.275 "write": true, 00:09:59.275 "unmap": false, 00:09:59.275 "flush": false, 00:09:59.275 "reset": true, 00:09:59.275 "nvme_admin": false, 00:09:59.275 "nvme_io": false, 00:09:59.275 "nvme_io_md": false, 00:09:59.275 "write_zeroes": true, 00:09:59.275 "zcopy": false, 00:09:59.275 
"get_zone_info": false, 00:09:59.275 "zone_management": false, 00:09:59.275 "zone_append": false, 00:09:59.275 "compare": false, 00:09:59.275 "compare_and_write": false, 00:09:59.275 "abort": false, 00:09:59.275 "seek_hole": false, 00:09:59.275 "seek_data": false, 00:09:59.275 "copy": false, 00:09:59.275 "nvme_iov_md": false 00:09:59.275 }, 00:09:59.275 "memory_domains": [ 00:09:59.275 { 00:09:59.275 "dma_device_id": "system", 00:09:59.275 "dma_device_type": 1 00:09:59.275 }, 00:09:59.275 { 00:09:59.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.275 "dma_device_type": 2 00:09:59.275 }, 00:09:59.275 { 00:09:59.275 "dma_device_id": "system", 00:09:59.275 "dma_device_type": 1 00:09:59.275 }, 00:09:59.275 { 00:09:59.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.275 "dma_device_type": 2 00:09:59.276 }, 00:09:59.276 { 00:09:59.276 "dma_device_id": "system", 00:09:59.276 "dma_device_type": 1 00:09:59.276 }, 00:09:59.276 { 00:09:59.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.276 "dma_device_type": 2 00:09:59.276 } 00:09:59.276 ], 00:09:59.276 "driver_specific": { 00:09:59.276 "raid": { 00:09:59.276 "uuid": "4cc9196e-5dcc-4b43-9874-2244a9e0cb7b", 00:09:59.276 "strip_size_kb": 0, 00:09:59.276 "state": "online", 00:09:59.276 "raid_level": "raid1", 00:09:59.276 "superblock": false, 00:09:59.276 "num_base_bdevs": 3, 00:09:59.276 "num_base_bdevs_discovered": 3, 00:09:59.276 "num_base_bdevs_operational": 3, 00:09:59.276 "base_bdevs_list": [ 00:09:59.276 { 00:09:59.276 "name": "NewBaseBdev", 00:09:59.276 "uuid": "c776ef1b-91e1-4f6b-80a1-b38b508a7121", 00:09:59.276 "is_configured": true, 00:09:59.276 "data_offset": 0, 00:09:59.276 "data_size": 65536 00:09:59.276 }, 00:09:59.276 { 00:09:59.276 "name": "BaseBdev2", 00:09:59.276 "uuid": "a7ad386e-24e1-4f52-8646-b59c4fc308a2", 00:09:59.276 "is_configured": true, 00:09:59.276 "data_offset": 0, 00:09:59.276 "data_size": 65536 00:09:59.276 }, 00:09:59.276 { 00:09:59.276 "name": "BaseBdev3", 00:09:59.276 "uuid": 
"c69a4e7c-cce9-4fb1-b4d0-a3b4cf772e2a", 00:09:59.276 "is_configured": true, 00:09:59.276 "data_offset": 0, 00:09:59.276 "data_size": 65536 00:09:59.276 } 00:09:59.276 ] 00:09:59.276 } 00:09:59.276 } 00:09:59.276 }' 00:09:59.276 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.276 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:59.276 BaseBdev2 00:09:59.276 BaseBdev3' 00:09:59.276 17:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.276 17:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:59.276 17:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.276 17:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:59.276 17:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.276 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.276 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.276 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.276 17:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.276 17:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.276 17:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.276 17:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:59.276 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.276 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.276 17:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.276 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.535 17:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.535 17:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.535 17:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.535 17:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:59.535 17:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.535 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.535 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.535 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.535 17:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.535 17:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.535 17:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.535 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.535 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.535 
[2024-11-26 17:54:41.206792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.535 [2024-11-26 17:54:41.206941] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.535 [2024-11-26 17:54:41.207109] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.535 [2024-11-26 17:54:41.207512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.535 [2024-11-26 17:54:41.207588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:59.536 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.536 17:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67620 00:09:59.536 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67620 ']' 00:09:59.536 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67620 00:09:59.536 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:59.536 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.536 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67620 00:09:59.536 killing process with pid 67620 00:09:59.536 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.536 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.536 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67620' 00:09:59.536 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67620 00:09:59.536 [2024-11-26 
17:54:41.246446] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:59.536 17:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67620 00:09:59.795 [2024-11-26 17:54:41.572259] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:01.175 00:10:01.175 real 0m11.117s 00:10:01.175 user 0m17.589s 00:10:01.175 sys 0m1.791s 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.175 ************************************ 00:10:01.175 END TEST raid_state_function_test 00:10:01.175 ************************************ 00:10:01.175 17:54:42 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:01.175 17:54:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:01.175 17:54:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.175 17:54:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.175 ************************************ 00:10:01.175 START TEST raid_state_function_test_sb 00:10:01.175 ************************************ 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:01.175 17:54:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:01.175 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:01.176 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:01.176 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:01.176 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:01.176 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:01.176 
17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:01.176 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:01.176 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:01.176 Process raid pid: 68247 00:10:01.176 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68247 00:10:01.176 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:01.176 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68247' 00:10:01.176 17:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68247 00:10:01.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.176 17:54:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68247 ']' 00:10:01.176 17:54:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.176 17:54:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.176 17:54:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.176 17:54:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.176 17:54:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.176 [2024-11-26 17:54:42.966097] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:10:01.176 [2024-11-26 17:54:42.966332] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.435 [2024-11-26 17:54:43.147513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.436 [2024-11-26 17:54:43.277825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.696 [2024-11-26 17:54:43.513162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.696 [2024-11-26 17:54:43.513326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.266 [2024-11-26 17:54:43.872658] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.266 [2024-11-26 17:54:43.872827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.266 [2024-11-26 17:54:43.872884] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.266 [2024-11-26 17:54:43.872901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.266 [2024-11-26 17:54:43.872909] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:02.266 [2024-11-26 17:54:43.872921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.266 "name": "Existed_Raid", 00:10:02.266 "uuid": "0655ca67-1133-46a9-979c-90e23ab853f5", 00:10:02.266 "strip_size_kb": 0, 00:10:02.266 "state": "configuring", 00:10:02.266 "raid_level": "raid1", 00:10:02.266 "superblock": true, 00:10:02.266 "num_base_bdevs": 3, 00:10:02.266 "num_base_bdevs_discovered": 0, 00:10:02.266 "num_base_bdevs_operational": 3, 00:10:02.266 "base_bdevs_list": [ 00:10:02.266 { 00:10:02.266 "name": "BaseBdev1", 00:10:02.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.266 "is_configured": false, 00:10:02.266 "data_offset": 0, 00:10:02.266 "data_size": 0 00:10:02.266 }, 00:10:02.266 { 00:10:02.266 "name": "BaseBdev2", 00:10:02.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.266 "is_configured": false, 00:10:02.266 "data_offset": 0, 00:10:02.266 "data_size": 0 00:10:02.266 }, 00:10:02.266 { 00:10:02.266 "name": "BaseBdev3", 00:10:02.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.266 "is_configured": false, 00:10:02.266 "data_offset": 0, 00:10:02.266 "data_size": 0 00:10:02.266 } 00:10:02.266 ] 00:10:02.266 }' 00:10:02.266 17:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.267 17:54:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.526 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.526 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.526 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.526 [2024-11-26 17:54:44.359796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.527 [2024-11-26 17:54:44.359953] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:02.527 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.527 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.527 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.527 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.527 [2024-11-26 17:54:44.371800] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.527 [2024-11-26 17:54:44.371946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.527 [2024-11-26 17:54:44.371980] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.527 [2024-11-26 17:54:44.372006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.527 [2024-11-26 17:54:44.372048] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.527 [2024-11-26 17:54:44.372062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.527 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.527 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:02.527 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.527 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.787 [2024-11-26 17:54:44.426486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.787 BaseBdev1 
00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.787 [ 00:10:02.787 { 00:10:02.787 "name": "BaseBdev1", 00:10:02.787 "aliases": [ 00:10:02.787 "90ddcf37-f133-4a2f-9bec-3b4f3e004904" 00:10:02.787 ], 00:10:02.787 "product_name": "Malloc disk", 00:10:02.787 "block_size": 512, 00:10:02.787 "num_blocks": 65536, 00:10:02.787 "uuid": "90ddcf37-f133-4a2f-9bec-3b4f3e004904", 00:10:02.787 "assigned_rate_limits": { 00:10:02.787 
"rw_ios_per_sec": 0, 00:10:02.787 "rw_mbytes_per_sec": 0, 00:10:02.787 "r_mbytes_per_sec": 0, 00:10:02.787 "w_mbytes_per_sec": 0 00:10:02.787 }, 00:10:02.787 "claimed": true, 00:10:02.787 "claim_type": "exclusive_write", 00:10:02.787 "zoned": false, 00:10:02.787 "supported_io_types": { 00:10:02.787 "read": true, 00:10:02.787 "write": true, 00:10:02.787 "unmap": true, 00:10:02.787 "flush": true, 00:10:02.787 "reset": true, 00:10:02.787 "nvme_admin": false, 00:10:02.787 "nvme_io": false, 00:10:02.787 "nvme_io_md": false, 00:10:02.787 "write_zeroes": true, 00:10:02.787 "zcopy": true, 00:10:02.787 "get_zone_info": false, 00:10:02.787 "zone_management": false, 00:10:02.787 "zone_append": false, 00:10:02.787 "compare": false, 00:10:02.787 "compare_and_write": false, 00:10:02.787 "abort": true, 00:10:02.787 "seek_hole": false, 00:10:02.787 "seek_data": false, 00:10:02.787 "copy": true, 00:10:02.787 "nvme_iov_md": false 00:10:02.787 }, 00:10:02.787 "memory_domains": [ 00:10:02.787 { 00:10:02.787 "dma_device_id": "system", 00:10:02.787 "dma_device_type": 1 00:10:02.787 }, 00:10:02.787 { 00:10:02.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.787 "dma_device_type": 2 00:10:02.787 } 00:10:02.787 ], 00:10:02.787 "driver_specific": {} 00:10:02.787 } 00:10:02.787 ] 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.787 "name": "Existed_Raid", 00:10:02.787 "uuid": "ccd569e5-d518-4a76-943e-fa48e087bab4", 00:10:02.787 "strip_size_kb": 0, 00:10:02.787 "state": "configuring", 00:10:02.787 "raid_level": "raid1", 00:10:02.787 "superblock": true, 00:10:02.787 "num_base_bdevs": 3, 00:10:02.787 "num_base_bdevs_discovered": 1, 00:10:02.787 "num_base_bdevs_operational": 3, 00:10:02.787 "base_bdevs_list": [ 00:10:02.787 { 00:10:02.787 "name": "BaseBdev1", 00:10:02.787 "uuid": "90ddcf37-f133-4a2f-9bec-3b4f3e004904", 00:10:02.787 "is_configured": true, 00:10:02.787 "data_offset": 2048, 00:10:02.787 "data_size": 63488 
00:10:02.787 }, 00:10:02.787 { 00:10:02.787 "name": "BaseBdev2", 00:10:02.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.787 "is_configured": false, 00:10:02.787 "data_offset": 0, 00:10:02.787 "data_size": 0 00:10:02.787 }, 00:10:02.787 { 00:10:02.787 "name": "BaseBdev3", 00:10:02.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.787 "is_configured": false, 00:10:02.787 "data_offset": 0, 00:10:02.787 "data_size": 0 00:10:02.787 } 00:10:02.787 ] 00:10:02.787 }' 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.787 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.049 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.049 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.049 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.049 [2024-11-26 17:54:44.901837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.049 [2024-11-26 17:54:44.902042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:03.049 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.049 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:03.049 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.049 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.310 [2024-11-26 17:54:44.913894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.310 [2024-11-26 17:54:44.916260] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.310 [2024-11-26 17:54:44.916359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.310 [2024-11-26 17:54:44.916399] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:03.310 [2024-11-26 17:54:44.916428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.310 "name": "Existed_Raid", 00:10:03.310 "uuid": "b7e1a03a-337d-44d7-8686-b52b99f56e34", 00:10:03.310 "strip_size_kb": 0, 00:10:03.310 "state": "configuring", 00:10:03.310 "raid_level": "raid1", 00:10:03.310 "superblock": true, 00:10:03.310 "num_base_bdevs": 3, 00:10:03.310 "num_base_bdevs_discovered": 1, 00:10:03.310 "num_base_bdevs_operational": 3, 00:10:03.310 "base_bdevs_list": [ 00:10:03.310 { 00:10:03.310 "name": "BaseBdev1", 00:10:03.310 "uuid": "90ddcf37-f133-4a2f-9bec-3b4f3e004904", 00:10:03.310 "is_configured": true, 00:10:03.310 "data_offset": 2048, 00:10:03.310 "data_size": 63488 00:10:03.310 }, 00:10:03.310 { 00:10:03.310 "name": "BaseBdev2", 00:10:03.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.310 "is_configured": false, 00:10:03.310 "data_offset": 0, 00:10:03.310 "data_size": 0 00:10:03.310 }, 00:10:03.310 { 00:10:03.310 "name": "BaseBdev3", 00:10:03.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.310 "is_configured": false, 00:10:03.310 "data_offset": 0, 00:10:03.310 "data_size": 0 00:10:03.310 } 00:10:03.310 ] 00:10:03.310 }' 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.310 17:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:03.569 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:03.569 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.569 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.570 [2024-11-26 17:54:45.415748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.570 BaseBdev2 00:10:03.570 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.570 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:03.570 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:03.570 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.570 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:03.570 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.570 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.570 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.570 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.570 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.829 [ 00:10:03.829 { 00:10:03.829 "name": "BaseBdev2", 00:10:03.829 "aliases": [ 00:10:03.829 "518a1631-de6a-44ae-a14f-c101c396a8cc" 00:10:03.829 ], 00:10:03.829 "product_name": "Malloc disk", 00:10:03.829 "block_size": 512, 00:10:03.829 "num_blocks": 65536, 00:10:03.829 "uuid": "518a1631-de6a-44ae-a14f-c101c396a8cc", 00:10:03.829 "assigned_rate_limits": { 00:10:03.829 "rw_ios_per_sec": 0, 00:10:03.829 "rw_mbytes_per_sec": 0, 00:10:03.829 "r_mbytes_per_sec": 0, 00:10:03.829 "w_mbytes_per_sec": 0 00:10:03.829 }, 00:10:03.829 "claimed": true, 00:10:03.829 "claim_type": "exclusive_write", 00:10:03.829 "zoned": false, 00:10:03.829 "supported_io_types": { 00:10:03.829 "read": true, 00:10:03.829 "write": true, 00:10:03.829 "unmap": true, 00:10:03.829 "flush": true, 00:10:03.829 "reset": true, 00:10:03.829 "nvme_admin": false, 00:10:03.829 "nvme_io": false, 00:10:03.829 "nvme_io_md": false, 00:10:03.829 "write_zeroes": true, 00:10:03.829 "zcopy": true, 00:10:03.829 "get_zone_info": false, 00:10:03.829 "zone_management": false, 00:10:03.829 "zone_append": false, 00:10:03.829 "compare": false, 00:10:03.829 "compare_and_write": false, 00:10:03.829 "abort": true, 00:10:03.829 "seek_hole": false, 00:10:03.829 "seek_data": false, 00:10:03.829 "copy": true, 00:10:03.829 "nvme_iov_md": false 00:10:03.829 }, 00:10:03.829 "memory_domains": [ 00:10:03.829 { 00:10:03.829 "dma_device_id": "system", 00:10:03.829 "dma_device_type": 1 00:10:03.829 }, 00:10:03.829 { 00:10:03.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.829 "dma_device_type": 2 00:10:03.829 } 00:10:03.829 ], 00:10:03.829 "driver_specific": {} 00:10:03.829 } 00:10:03.829 ] 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.829 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.829 
17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.829 "name": "Existed_Raid", 00:10:03.829 "uuid": "b7e1a03a-337d-44d7-8686-b52b99f56e34", 00:10:03.829 "strip_size_kb": 0, 00:10:03.829 "state": "configuring", 00:10:03.829 "raid_level": "raid1", 00:10:03.829 "superblock": true, 00:10:03.829 "num_base_bdevs": 3, 00:10:03.829 "num_base_bdevs_discovered": 2, 00:10:03.829 "num_base_bdevs_operational": 3, 00:10:03.829 "base_bdevs_list": [ 00:10:03.829 { 00:10:03.829 "name": "BaseBdev1", 00:10:03.829 "uuid": "90ddcf37-f133-4a2f-9bec-3b4f3e004904", 00:10:03.829 "is_configured": true, 00:10:03.829 "data_offset": 2048, 00:10:03.829 "data_size": 63488 00:10:03.829 }, 00:10:03.829 { 00:10:03.829 "name": "BaseBdev2", 00:10:03.829 "uuid": "518a1631-de6a-44ae-a14f-c101c396a8cc", 00:10:03.829 "is_configured": true, 00:10:03.830 "data_offset": 2048, 00:10:03.830 "data_size": 63488 00:10:03.830 }, 00:10:03.830 { 00:10:03.830 "name": "BaseBdev3", 00:10:03.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.830 "is_configured": false, 00:10:03.830 "data_offset": 0, 00:10:03.830 "data_size": 0 00:10:03.830 } 00:10:03.830 ] 00:10:03.830 }' 00:10:03.830 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.830 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.090 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:04.090 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.090 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.090 [2024-11-26 17:54:45.948363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.090 [2024-11-26 17:54:45.948817] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:04.090 [2024-11-26 17:54:45.948924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:04.090 [2024-11-26 17:54:45.949307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:04.090 BaseBdev3 00:10:04.090 [2024-11-26 17:54:45.949554] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:04.090 [2024-11-26 17:54:45.949594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:04.090 [2024-11-26 17:54:45.949795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.351 17:54:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.351 [ 00:10:04.351 { 00:10:04.351 "name": "BaseBdev3", 00:10:04.351 "aliases": [ 00:10:04.351 "88c2be94-8a3f-428d-962e-7c0b83c5a979" 00:10:04.351 ], 00:10:04.351 "product_name": "Malloc disk", 00:10:04.351 "block_size": 512, 00:10:04.351 "num_blocks": 65536, 00:10:04.351 "uuid": "88c2be94-8a3f-428d-962e-7c0b83c5a979", 00:10:04.351 "assigned_rate_limits": { 00:10:04.351 "rw_ios_per_sec": 0, 00:10:04.351 "rw_mbytes_per_sec": 0, 00:10:04.351 "r_mbytes_per_sec": 0, 00:10:04.351 "w_mbytes_per_sec": 0 00:10:04.351 }, 00:10:04.351 "claimed": true, 00:10:04.351 "claim_type": "exclusive_write", 00:10:04.351 "zoned": false, 00:10:04.351 "supported_io_types": { 00:10:04.351 "read": true, 00:10:04.351 "write": true, 00:10:04.351 "unmap": true, 00:10:04.351 "flush": true, 00:10:04.351 "reset": true, 00:10:04.351 "nvme_admin": false, 00:10:04.351 "nvme_io": false, 00:10:04.351 "nvme_io_md": false, 00:10:04.351 "write_zeroes": true, 00:10:04.351 "zcopy": true, 00:10:04.351 "get_zone_info": false, 00:10:04.351 "zone_management": false, 00:10:04.351 "zone_append": false, 00:10:04.351 "compare": false, 00:10:04.351 "compare_and_write": false, 00:10:04.351 "abort": true, 00:10:04.351 "seek_hole": false, 00:10:04.351 "seek_data": false, 00:10:04.351 "copy": true, 00:10:04.351 "nvme_iov_md": false 00:10:04.351 }, 00:10:04.351 "memory_domains": [ 00:10:04.351 { 00:10:04.351 "dma_device_id": "system", 00:10:04.351 "dma_device_type": 1 00:10:04.351 }, 00:10:04.351 { 00:10:04.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.351 "dma_device_type": 2 00:10:04.351 } 00:10:04.351 ], 00:10:04.351 "driver_specific": {} 00:10:04.351 } 00:10:04.351 ] 
00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.351 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.352 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.352 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.352 17:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.352 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.352 17:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.352 17:54:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.352 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.352 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.352 "name": "Existed_Raid", 00:10:04.352 "uuid": "b7e1a03a-337d-44d7-8686-b52b99f56e34", 00:10:04.352 "strip_size_kb": 0, 00:10:04.352 "state": "online", 00:10:04.352 "raid_level": "raid1", 00:10:04.352 "superblock": true, 00:10:04.352 "num_base_bdevs": 3, 00:10:04.352 "num_base_bdevs_discovered": 3, 00:10:04.352 "num_base_bdevs_operational": 3, 00:10:04.352 "base_bdevs_list": [ 00:10:04.352 { 00:10:04.352 "name": "BaseBdev1", 00:10:04.352 "uuid": "90ddcf37-f133-4a2f-9bec-3b4f3e004904", 00:10:04.352 "is_configured": true, 00:10:04.352 "data_offset": 2048, 00:10:04.352 "data_size": 63488 00:10:04.352 }, 00:10:04.352 { 00:10:04.352 "name": "BaseBdev2", 00:10:04.352 "uuid": "518a1631-de6a-44ae-a14f-c101c396a8cc", 00:10:04.352 "is_configured": true, 00:10:04.352 "data_offset": 2048, 00:10:04.352 "data_size": 63488 00:10:04.352 }, 00:10:04.352 { 00:10:04.352 "name": "BaseBdev3", 00:10:04.352 "uuid": "88c2be94-8a3f-428d-962e-7c0b83c5a979", 00:10:04.352 "is_configured": true, 00:10:04.352 "data_offset": 2048, 00:10:04.352 "data_size": 63488 00:10:04.352 } 00:10:04.352 ] 00:10:04.352 }' 00:10:04.352 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.352 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.612 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:04.612 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:04.612 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:04.612 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:04.612 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.612 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:04.612 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:04.613 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.613 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.613 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.873 [2024-11-26 17:54:46.475959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.873 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.873 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.873 "name": "Existed_Raid", 00:10:04.873 "aliases": [ 00:10:04.873 "b7e1a03a-337d-44d7-8686-b52b99f56e34" 00:10:04.873 ], 00:10:04.873 "product_name": "Raid Volume", 00:10:04.873 "block_size": 512, 00:10:04.873 "num_blocks": 63488, 00:10:04.873 "uuid": "b7e1a03a-337d-44d7-8686-b52b99f56e34", 00:10:04.873 "assigned_rate_limits": { 00:10:04.873 "rw_ios_per_sec": 0, 00:10:04.873 "rw_mbytes_per_sec": 0, 00:10:04.873 "r_mbytes_per_sec": 0, 00:10:04.873 "w_mbytes_per_sec": 0 00:10:04.873 }, 00:10:04.873 "claimed": false, 00:10:04.873 "zoned": false, 00:10:04.873 "supported_io_types": { 00:10:04.873 "read": true, 00:10:04.873 "write": true, 00:10:04.873 "unmap": false, 00:10:04.873 "flush": false, 00:10:04.873 "reset": true, 00:10:04.873 "nvme_admin": false, 00:10:04.873 "nvme_io": false, 00:10:04.873 "nvme_io_md": false, 00:10:04.873 
"write_zeroes": true, 00:10:04.873 "zcopy": false, 00:10:04.873 "get_zone_info": false, 00:10:04.873 "zone_management": false, 00:10:04.873 "zone_append": false, 00:10:04.873 "compare": false, 00:10:04.873 "compare_and_write": false, 00:10:04.873 "abort": false, 00:10:04.873 "seek_hole": false, 00:10:04.873 "seek_data": false, 00:10:04.873 "copy": false, 00:10:04.873 "nvme_iov_md": false 00:10:04.873 }, 00:10:04.873 "memory_domains": [ 00:10:04.873 { 00:10:04.873 "dma_device_id": "system", 00:10:04.873 "dma_device_type": 1 00:10:04.873 }, 00:10:04.873 { 00:10:04.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.873 "dma_device_type": 2 00:10:04.873 }, 00:10:04.873 { 00:10:04.873 "dma_device_id": "system", 00:10:04.873 "dma_device_type": 1 00:10:04.873 }, 00:10:04.873 { 00:10:04.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.873 "dma_device_type": 2 00:10:04.873 }, 00:10:04.873 { 00:10:04.873 "dma_device_id": "system", 00:10:04.873 "dma_device_type": 1 00:10:04.873 }, 00:10:04.873 { 00:10:04.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.873 "dma_device_type": 2 00:10:04.873 } 00:10:04.873 ], 00:10:04.873 "driver_specific": { 00:10:04.873 "raid": { 00:10:04.873 "uuid": "b7e1a03a-337d-44d7-8686-b52b99f56e34", 00:10:04.873 "strip_size_kb": 0, 00:10:04.873 "state": "online", 00:10:04.873 "raid_level": "raid1", 00:10:04.873 "superblock": true, 00:10:04.873 "num_base_bdevs": 3, 00:10:04.873 "num_base_bdevs_discovered": 3, 00:10:04.873 "num_base_bdevs_operational": 3, 00:10:04.873 "base_bdevs_list": [ 00:10:04.873 { 00:10:04.873 "name": "BaseBdev1", 00:10:04.873 "uuid": "90ddcf37-f133-4a2f-9bec-3b4f3e004904", 00:10:04.873 "is_configured": true, 00:10:04.873 "data_offset": 2048, 00:10:04.873 "data_size": 63488 00:10:04.873 }, 00:10:04.873 { 00:10:04.873 "name": "BaseBdev2", 00:10:04.873 "uuid": "518a1631-de6a-44ae-a14f-c101c396a8cc", 00:10:04.873 "is_configured": true, 00:10:04.873 "data_offset": 2048, 00:10:04.873 "data_size": 63488 00:10:04.873 }, 
00:10:04.873 { 00:10:04.873 "name": "BaseBdev3", 00:10:04.873 "uuid": "88c2be94-8a3f-428d-962e-7c0b83c5a979", 00:10:04.873 "is_configured": true, 00:10:04.873 "data_offset": 2048, 00:10:04.873 "data_size": 63488 00:10:04.873 } 00:10:04.873 ] 00:10:04.873 } 00:10:04.873 } 00:10:04.873 }' 00:10:04.873 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.873 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:04.873 BaseBdev2 00:10:04.873 BaseBdev3' 00:10:04.873 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.873 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.873 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.873 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.874 
17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.874 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.134 [2024-11-26 17:54:46.747236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.134 
17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.134 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.134 "name": "Existed_Raid", 00:10:05.134 "uuid": "b7e1a03a-337d-44d7-8686-b52b99f56e34", 00:10:05.134 "strip_size_kb": 0, 00:10:05.134 "state": "online", 00:10:05.134 "raid_level": "raid1", 00:10:05.134 "superblock": true, 00:10:05.134 "num_base_bdevs": 3, 00:10:05.134 "num_base_bdevs_discovered": 2, 00:10:05.134 "num_base_bdevs_operational": 2, 00:10:05.134 "base_bdevs_list": [ 00:10:05.134 { 00:10:05.134 "name": null, 00:10:05.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.134 "is_configured": false, 00:10:05.134 "data_offset": 0, 00:10:05.134 "data_size": 63488 00:10:05.134 }, 00:10:05.134 { 00:10:05.134 "name": "BaseBdev2", 00:10:05.134 "uuid": "518a1631-de6a-44ae-a14f-c101c396a8cc", 00:10:05.134 "is_configured": true, 00:10:05.134 "data_offset": 2048, 00:10:05.134 "data_size": 63488 00:10:05.134 }, 00:10:05.134 { 00:10:05.134 "name": "BaseBdev3", 00:10:05.134 "uuid": "88c2be94-8a3f-428d-962e-7c0b83c5a979", 00:10:05.134 "is_configured": true, 00:10:05.135 "data_offset": 2048, 00:10:05.135 "data_size": 63488 00:10:05.135 } 00:10:05.135 ] 00:10:05.135 }' 00:10:05.135 17:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.135 
17:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.704 [2024-11-26 17:54:47.355818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.704 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.704 [2024-11-26 17:54:47.527283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:05.704 [2024-11-26 17:54:47.527513] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.965 [2024-11-26 17:54:47.637387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.965 [2024-11-26 17:54:47.637565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.965 [2024-11-26 17:54:47.637619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.965 BaseBdev2 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.965 [ 00:10:05.965 { 00:10:05.965 "name": "BaseBdev2", 00:10:05.965 "aliases": [ 00:10:05.965 "54e6b646-1e00-40b2-b580-9e189a461dee" 00:10:05.965 ], 00:10:05.965 "product_name": "Malloc disk", 00:10:05.965 "block_size": 512, 00:10:05.965 "num_blocks": 65536, 00:10:05.965 "uuid": "54e6b646-1e00-40b2-b580-9e189a461dee", 00:10:05.965 "assigned_rate_limits": { 00:10:05.965 "rw_ios_per_sec": 0, 00:10:05.965 "rw_mbytes_per_sec": 0, 00:10:05.965 "r_mbytes_per_sec": 0, 00:10:05.965 "w_mbytes_per_sec": 0 00:10:05.965 }, 00:10:05.965 "claimed": false, 00:10:05.965 "zoned": false, 00:10:05.965 "supported_io_types": { 00:10:05.965 "read": true, 00:10:05.965 "write": true, 00:10:05.965 "unmap": true, 00:10:05.965 "flush": true, 00:10:05.965 "reset": true, 00:10:05.965 "nvme_admin": false, 00:10:05.965 "nvme_io": false, 00:10:05.965 
"nvme_io_md": false, 00:10:05.965 "write_zeroes": true, 00:10:05.965 "zcopy": true, 00:10:05.965 "get_zone_info": false, 00:10:05.965 "zone_management": false, 00:10:05.965 "zone_append": false, 00:10:05.965 "compare": false, 00:10:05.965 "compare_and_write": false, 00:10:05.965 "abort": true, 00:10:05.965 "seek_hole": false, 00:10:05.965 "seek_data": false, 00:10:05.965 "copy": true, 00:10:05.965 "nvme_iov_md": false 00:10:05.965 }, 00:10:05.965 "memory_domains": [ 00:10:05.965 { 00:10:05.965 "dma_device_id": "system", 00:10:05.965 "dma_device_type": 1 00:10:05.965 }, 00:10:05.965 { 00:10:05.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.965 "dma_device_type": 2 00:10:05.965 } 00:10:05.965 ], 00:10:05.965 "driver_specific": {} 00:10:05.965 } 00:10:05.965 ] 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.965 BaseBdev3 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.965 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.225 [ 00:10:06.225 { 00:10:06.225 "name": "BaseBdev3", 00:10:06.225 "aliases": [ 00:10:06.225 "a0da1162-61f6-4078-b72b-d94e29a8c9d7" 00:10:06.225 ], 00:10:06.225 "product_name": "Malloc disk", 00:10:06.225 "block_size": 512, 00:10:06.225 "num_blocks": 65536, 00:10:06.225 "uuid": "a0da1162-61f6-4078-b72b-d94e29a8c9d7", 00:10:06.225 "assigned_rate_limits": { 00:10:06.225 "rw_ios_per_sec": 0, 00:10:06.225 "rw_mbytes_per_sec": 0, 00:10:06.225 "r_mbytes_per_sec": 0, 00:10:06.225 "w_mbytes_per_sec": 0 00:10:06.225 }, 00:10:06.225 "claimed": false, 00:10:06.225 "zoned": false, 00:10:06.225 "supported_io_types": { 00:10:06.225 "read": true, 00:10:06.225 "write": true, 00:10:06.225 "unmap": true, 00:10:06.225 "flush": true, 00:10:06.225 "reset": true, 00:10:06.225 "nvme_admin": false, 
00:10:06.225 "nvme_io": false, 00:10:06.225 "nvme_io_md": false, 00:10:06.225 "write_zeroes": true, 00:10:06.225 "zcopy": true, 00:10:06.225 "get_zone_info": false, 00:10:06.225 "zone_management": false, 00:10:06.225 "zone_append": false, 00:10:06.225 "compare": false, 00:10:06.225 "compare_and_write": false, 00:10:06.225 "abort": true, 00:10:06.225 "seek_hole": false, 00:10:06.225 "seek_data": false, 00:10:06.225 "copy": true, 00:10:06.225 "nvme_iov_md": false 00:10:06.225 }, 00:10:06.225 "memory_domains": [ 00:10:06.225 { 00:10:06.225 "dma_device_id": "system", 00:10:06.225 "dma_device_type": 1 00:10:06.225 }, 00:10:06.225 { 00:10:06.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.225 "dma_device_type": 2 00:10:06.225 } 00:10:06.225 ], 00:10:06.225 "driver_specific": {} 00:10:06.225 } 00:10:06.225 ] 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.225 [2024-11-26 17:54:47.859461] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.225 [2024-11-26 17:54:47.859633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.225 [2024-11-26 17:54:47.859688] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.225 [2024-11-26 17:54:47.861826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.225 
17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.225 "name": "Existed_Raid", 00:10:06.225 "uuid": "0f461950-77bc-4225-9344-5b7c055eff30", 00:10:06.225 "strip_size_kb": 0, 00:10:06.225 "state": "configuring", 00:10:06.225 "raid_level": "raid1", 00:10:06.225 "superblock": true, 00:10:06.225 "num_base_bdevs": 3, 00:10:06.225 "num_base_bdevs_discovered": 2, 00:10:06.225 "num_base_bdevs_operational": 3, 00:10:06.225 "base_bdevs_list": [ 00:10:06.225 { 00:10:06.225 "name": "BaseBdev1", 00:10:06.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.225 "is_configured": false, 00:10:06.225 "data_offset": 0, 00:10:06.225 "data_size": 0 00:10:06.225 }, 00:10:06.225 { 00:10:06.225 "name": "BaseBdev2", 00:10:06.225 "uuid": "54e6b646-1e00-40b2-b580-9e189a461dee", 00:10:06.225 "is_configured": true, 00:10:06.225 "data_offset": 2048, 00:10:06.225 "data_size": 63488 00:10:06.225 }, 00:10:06.225 { 00:10:06.225 "name": "BaseBdev3", 00:10:06.225 "uuid": "a0da1162-61f6-4078-b72b-d94e29a8c9d7", 00:10:06.225 "is_configured": true, 00:10:06.225 "data_offset": 2048, 00:10:06.225 "data_size": 63488 00:10:06.225 } 00:10:06.225 ] 00:10:06.225 }' 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.225 17:54:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.485 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:06.485 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.485 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.485 [2024-11-26 17:54:48.298802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:06.485 17:54:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.485 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.485 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.485 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.485 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.485 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.486 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.486 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.486 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.486 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.486 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.486 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.486 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.486 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.486 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.486 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.746 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.746 "name": 
"Existed_Raid", 00:10:06.746 "uuid": "0f461950-77bc-4225-9344-5b7c055eff30", 00:10:06.746 "strip_size_kb": 0, 00:10:06.746 "state": "configuring", 00:10:06.746 "raid_level": "raid1", 00:10:06.746 "superblock": true, 00:10:06.746 "num_base_bdevs": 3, 00:10:06.746 "num_base_bdevs_discovered": 1, 00:10:06.746 "num_base_bdevs_operational": 3, 00:10:06.746 "base_bdevs_list": [ 00:10:06.746 { 00:10:06.746 "name": "BaseBdev1", 00:10:06.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.746 "is_configured": false, 00:10:06.746 "data_offset": 0, 00:10:06.746 "data_size": 0 00:10:06.746 }, 00:10:06.746 { 00:10:06.746 "name": null, 00:10:06.746 "uuid": "54e6b646-1e00-40b2-b580-9e189a461dee", 00:10:06.746 "is_configured": false, 00:10:06.746 "data_offset": 0, 00:10:06.746 "data_size": 63488 00:10:06.746 }, 00:10:06.746 { 00:10:06.746 "name": "BaseBdev3", 00:10:06.746 "uuid": "a0da1162-61f6-4078-b72b-d94e29a8c9d7", 00:10:06.746 "is_configured": true, 00:10:06.746 "data_offset": 2048, 00:10:06.746 "data_size": 63488 00:10:06.746 } 00:10:06.746 ] 00:10:06.746 }' 00:10:06.746 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.746 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.007 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:07.007 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.007 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.007 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.007 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.007 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:07.007 
17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:07.007 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.007 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.007 [2024-11-26 17:54:48.859335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.007 BaseBdev1 00:10:07.007 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.007 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:07.007 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:07.007 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.007 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:07.007 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.007 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.007 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.007 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.007 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.267 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.267 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:07.267 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:07.267 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.267 [ 00:10:07.267 { 00:10:07.267 "name": "BaseBdev1", 00:10:07.267 "aliases": [ 00:10:07.267 "b27f741d-7881-4ec8-8bad-3557a025d511" 00:10:07.267 ], 00:10:07.267 "product_name": "Malloc disk", 00:10:07.267 "block_size": 512, 00:10:07.267 "num_blocks": 65536, 00:10:07.267 "uuid": "b27f741d-7881-4ec8-8bad-3557a025d511", 00:10:07.267 "assigned_rate_limits": { 00:10:07.267 "rw_ios_per_sec": 0, 00:10:07.267 "rw_mbytes_per_sec": 0, 00:10:07.267 "r_mbytes_per_sec": 0, 00:10:07.267 "w_mbytes_per_sec": 0 00:10:07.267 }, 00:10:07.267 "claimed": true, 00:10:07.267 "claim_type": "exclusive_write", 00:10:07.267 "zoned": false, 00:10:07.267 "supported_io_types": { 00:10:07.267 "read": true, 00:10:07.267 "write": true, 00:10:07.267 "unmap": true, 00:10:07.267 "flush": true, 00:10:07.267 "reset": true, 00:10:07.267 "nvme_admin": false, 00:10:07.267 "nvme_io": false, 00:10:07.267 "nvme_io_md": false, 00:10:07.267 "write_zeroes": true, 00:10:07.267 "zcopy": true, 00:10:07.267 "get_zone_info": false, 00:10:07.267 "zone_management": false, 00:10:07.267 "zone_append": false, 00:10:07.267 "compare": false, 00:10:07.267 "compare_and_write": false, 00:10:07.267 "abort": true, 00:10:07.267 "seek_hole": false, 00:10:07.267 "seek_data": false, 00:10:07.267 "copy": true, 00:10:07.267 "nvme_iov_md": false 00:10:07.267 }, 00:10:07.267 "memory_domains": [ 00:10:07.267 { 00:10:07.267 "dma_device_id": "system", 00:10:07.267 "dma_device_type": 1 00:10:07.267 }, 00:10:07.267 { 00:10:07.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.267 "dma_device_type": 2 00:10:07.267 } 00:10:07.267 ], 00:10:07.267 "driver_specific": {} 00:10:07.267 } 00:10:07.267 ] 00:10:07.267 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.267 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:07.267 
17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:07.267 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.267 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.267 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.267 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.267 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.267 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.267 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.267 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.267 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.267 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.267 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.267 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.268 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.268 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.268 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.268 "name": "Existed_Raid", 00:10:07.268 "uuid": "0f461950-77bc-4225-9344-5b7c055eff30", 00:10:07.268 "strip_size_kb": 0, 
00:10:07.268 "state": "configuring", 00:10:07.268 "raid_level": "raid1", 00:10:07.268 "superblock": true, 00:10:07.268 "num_base_bdevs": 3, 00:10:07.268 "num_base_bdevs_discovered": 2, 00:10:07.268 "num_base_bdevs_operational": 3, 00:10:07.268 "base_bdevs_list": [ 00:10:07.268 { 00:10:07.268 "name": "BaseBdev1", 00:10:07.268 "uuid": "b27f741d-7881-4ec8-8bad-3557a025d511", 00:10:07.268 "is_configured": true, 00:10:07.268 "data_offset": 2048, 00:10:07.268 "data_size": 63488 00:10:07.268 }, 00:10:07.268 { 00:10:07.268 "name": null, 00:10:07.268 "uuid": "54e6b646-1e00-40b2-b580-9e189a461dee", 00:10:07.268 "is_configured": false, 00:10:07.268 "data_offset": 0, 00:10:07.268 "data_size": 63488 00:10:07.268 }, 00:10:07.268 { 00:10:07.268 "name": "BaseBdev3", 00:10:07.268 "uuid": "a0da1162-61f6-4078-b72b-d94e29a8c9d7", 00:10:07.268 "is_configured": true, 00:10:07.268 "data_offset": 2048, 00:10:07.268 "data_size": 63488 00:10:07.268 } 00:10:07.268 ] 00:10:07.268 }' 00:10:07.268 17:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.268 17:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.538 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.538 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.538 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.539 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:07.539 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.539 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:07.539 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:07.539 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.539 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.806 [2024-11-26 17:54:49.402598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:07.806 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.806 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:07.806 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.806 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.806 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.806 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.806 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.806 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.806 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.806 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.806 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.806 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.806 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.806 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.806 17:54:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.806 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.806 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.806 "name": "Existed_Raid", 00:10:07.806 "uuid": "0f461950-77bc-4225-9344-5b7c055eff30", 00:10:07.806 "strip_size_kb": 0, 00:10:07.806 "state": "configuring", 00:10:07.806 "raid_level": "raid1", 00:10:07.806 "superblock": true, 00:10:07.806 "num_base_bdevs": 3, 00:10:07.806 "num_base_bdevs_discovered": 1, 00:10:07.806 "num_base_bdevs_operational": 3, 00:10:07.806 "base_bdevs_list": [ 00:10:07.806 { 00:10:07.806 "name": "BaseBdev1", 00:10:07.806 "uuid": "b27f741d-7881-4ec8-8bad-3557a025d511", 00:10:07.806 "is_configured": true, 00:10:07.806 "data_offset": 2048, 00:10:07.806 "data_size": 63488 00:10:07.806 }, 00:10:07.806 { 00:10:07.806 "name": null, 00:10:07.806 "uuid": "54e6b646-1e00-40b2-b580-9e189a461dee", 00:10:07.806 "is_configured": false, 00:10:07.806 "data_offset": 0, 00:10:07.806 "data_size": 63488 00:10:07.806 }, 00:10:07.806 { 00:10:07.806 "name": null, 00:10:07.806 "uuid": "a0da1162-61f6-4078-b72b-d94e29a8c9d7", 00:10:07.806 "is_configured": false, 00:10:07.806 "data_offset": 0, 00:10:07.806 "data_size": 63488 00:10:07.806 } 00:10:07.806 ] 00:10:07.806 }' 00:10:07.806 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.806 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.132 [2024-11-26 17:54:49.897838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.132 "name": "Existed_Raid", 00:10:08.132 "uuid": "0f461950-77bc-4225-9344-5b7c055eff30", 00:10:08.132 "strip_size_kb": 0, 00:10:08.132 "state": "configuring", 00:10:08.132 "raid_level": "raid1", 00:10:08.132 "superblock": true, 00:10:08.132 "num_base_bdevs": 3, 00:10:08.132 "num_base_bdevs_discovered": 2, 00:10:08.132 "num_base_bdevs_operational": 3, 00:10:08.132 "base_bdevs_list": [ 00:10:08.132 { 00:10:08.132 "name": "BaseBdev1", 00:10:08.132 "uuid": "b27f741d-7881-4ec8-8bad-3557a025d511", 00:10:08.132 "is_configured": true, 00:10:08.132 "data_offset": 2048, 00:10:08.132 "data_size": 63488 00:10:08.132 }, 00:10:08.132 { 00:10:08.132 "name": null, 00:10:08.132 "uuid": "54e6b646-1e00-40b2-b580-9e189a461dee", 00:10:08.132 "is_configured": false, 00:10:08.132 "data_offset": 0, 00:10:08.132 "data_size": 63488 00:10:08.132 }, 00:10:08.132 { 00:10:08.132 "name": "BaseBdev3", 00:10:08.132 "uuid": "a0da1162-61f6-4078-b72b-d94e29a8c9d7", 00:10:08.132 "is_configured": true, 00:10:08.132 "data_offset": 2048, 00:10:08.132 "data_size": 63488 00:10:08.132 } 00:10:08.132 ] 00:10:08.132 }' 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.132 17:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.726 [2024-11-26 17:54:50.389143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.726 "name": "Existed_Raid", 00:10:08.726 "uuid": "0f461950-77bc-4225-9344-5b7c055eff30", 00:10:08.726 "strip_size_kb": 0, 00:10:08.726 "state": "configuring", 00:10:08.726 "raid_level": "raid1", 00:10:08.726 "superblock": true, 00:10:08.726 "num_base_bdevs": 3, 00:10:08.726 "num_base_bdevs_discovered": 1, 00:10:08.726 "num_base_bdevs_operational": 3, 00:10:08.726 "base_bdevs_list": [ 00:10:08.726 { 00:10:08.726 "name": null, 00:10:08.726 "uuid": "b27f741d-7881-4ec8-8bad-3557a025d511", 00:10:08.726 "is_configured": false, 00:10:08.726 "data_offset": 0, 00:10:08.726 "data_size": 63488 00:10:08.726 }, 00:10:08.726 { 00:10:08.726 "name": null, 00:10:08.726 "uuid": 
"54e6b646-1e00-40b2-b580-9e189a461dee", 00:10:08.726 "is_configured": false, 00:10:08.726 "data_offset": 0, 00:10:08.726 "data_size": 63488 00:10:08.726 }, 00:10:08.726 { 00:10:08.726 "name": "BaseBdev3", 00:10:08.726 "uuid": "a0da1162-61f6-4078-b72b-d94e29a8c9d7", 00:10:08.726 "is_configured": true, 00:10:08.726 "data_offset": 2048, 00:10:08.726 "data_size": 63488 00:10:08.726 } 00:10:08.726 ] 00:10:08.726 }' 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.726 17:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.334 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.334 17:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:09.334 17:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.334 17:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.334 17:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.334 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:09.334 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:09.334 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.334 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.334 [2024-11-26 17:54:51.017131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.334 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.334 17:54:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.335 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.335 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.335 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.335 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.335 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.335 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.335 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.335 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.335 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.335 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.335 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.335 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.335 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.335 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.335 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.335 "name": "Existed_Raid", 00:10:09.335 "uuid": "0f461950-77bc-4225-9344-5b7c055eff30", 00:10:09.335 "strip_size_kb": 0, 00:10:09.335 "state": "configuring", 00:10:09.335 
"raid_level": "raid1", 00:10:09.335 "superblock": true, 00:10:09.335 "num_base_bdevs": 3, 00:10:09.335 "num_base_bdevs_discovered": 2, 00:10:09.335 "num_base_bdevs_operational": 3, 00:10:09.335 "base_bdevs_list": [ 00:10:09.335 { 00:10:09.335 "name": null, 00:10:09.335 "uuid": "b27f741d-7881-4ec8-8bad-3557a025d511", 00:10:09.335 "is_configured": false, 00:10:09.335 "data_offset": 0, 00:10:09.335 "data_size": 63488 00:10:09.335 }, 00:10:09.335 { 00:10:09.335 "name": "BaseBdev2", 00:10:09.335 "uuid": "54e6b646-1e00-40b2-b580-9e189a461dee", 00:10:09.335 "is_configured": true, 00:10:09.335 "data_offset": 2048, 00:10:09.335 "data_size": 63488 00:10:09.335 }, 00:10:09.335 { 00:10:09.335 "name": "BaseBdev3", 00:10:09.335 "uuid": "a0da1162-61f6-4078-b72b-d94e29a8c9d7", 00:10:09.335 "is_configured": true, 00:10:09.335 "data_offset": 2048, 00:10:09.335 "data_size": 63488 00:10:09.335 } 00:10:09.335 ] 00:10:09.335 }' 00:10:09.335 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.335 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.910 17:54:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b27f741d-7881-4ec8-8bad-3557a025d511 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.910 [2024-11-26 17:54:51.607735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:09.910 [2024-11-26 17:54:51.608155] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:09.910 [2024-11-26 17:54:51.608217] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:09.910 NewBaseBdev 00:10:09.910 [2024-11-26 17:54:51.608577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:09.910 [2024-11-26 17:54:51.608805] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:09.910 [2024-11-26 17:54:51.608826] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:09.910 [2024-11-26 17:54:51.609032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.910 
17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.910 [ 00:10:09.910 { 00:10:09.910 "name": "NewBaseBdev", 00:10:09.910 "aliases": [ 00:10:09.910 "b27f741d-7881-4ec8-8bad-3557a025d511" 00:10:09.910 ], 00:10:09.910 "product_name": "Malloc disk", 00:10:09.910 "block_size": 512, 00:10:09.910 "num_blocks": 65536, 00:10:09.910 "uuid": "b27f741d-7881-4ec8-8bad-3557a025d511", 00:10:09.910 "assigned_rate_limits": { 00:10:09.910 "rw_ios_per_sec": 0, 00:10:09.910 "rw_mbytes_per_sec": 0, 00:10:09.910 "r_mbytes_per_sec": 0, 00:10:09.910 "w_mbytes_per_sec": 0 00:10:09.910 }, 00:10:09.910 "claimed": true, 00:10:09.910 "claim_type": "exclusive_write", 00:10:09.910 
"zoned": false, 00:10:09.910 "supported_io_types": { 00:10:09.910 "read": true, 00:10:09.910 "write": true, 00:10:09.910 "unmap": true, 00:10:09.910 "flush": true, 00:10:09.910 "reset": true, 00:10:09.910 "nvme_admin": false, 00:10:09.910 "nvme_io": false, 00:10:09.910 "nvme_io_md": false, 00:10:09.910 "write_zeroes": true, 00:10:09.910 "zcopy": true, 00:10:09.910 "get_zone_info": false, 00:10:09.910 "zone_management": false, 00:10:09.910 "zone_append": false, 00:10:09.910 "compare": false, 00:10:09.910 "compare_and_write": false, 00:10:09.910 "abort": true, 00:10:09.910 "seek_hole": false, 00:10:09.910 "seek_data": false, 00:10:09.910 "copy": true, 00:10:09.910 "nvme_iov_md": false 00:10:09.910 }, 00:10:09.910 "memory_domains": [ 00:10:09.910 { 00:10:09.910 "dma_device_id": "system", 00:10:09.910 "dma_device_type": 1 00:10:09.910 }, 00:10:09.910 { 00:10:09.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.910 "dma_device_type": 2 00:10:09.910 } 00:10:09.910 ], 00:10:09.910 "driver_specific": {} 00:10:09.910 } 00:10:09.910 ] 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.910 "name": "Existed_Raid", 00:10:09.910 "uuid": "0f461950-77bc-4225-9344-5b7c055eff30", 00:10:09.910 "strip_size_kb": 0, 00:10:09.910 "state": "online", 00:10:09.910 "raid_level": "raid1", 00:10:09.910 "superblock": true, 00:10:09.910 "num_base_bdevs": 3, 00:10:09.910 "num_base_bdevs_discovered": 3, 00:10:09.910 "num_base_bdevs_operational": 3, 00:10:09.910 "base_bdevs_list": [ 00:10:09.910 { 00:10:09.910 "name": "NewBaseBdev", 00:10:09.910 "uuid": "b27f741d-7881-4ec8-8bad-3557a025d511", 00:10:09.910 "is_configured": true, 00:10:09.910 "data_offset": 2048, 00:10:09.910 "data_size": 63488 00:10:09.910 }, 00:10:09.910 { 00:10:09.910 "name": "BaseBdev2", 00:10:09.910 "uuid": "54e6b646-1e00-40b2-b580-9e189a461dee", 00:10:09.910 "is_configured": true, 00:10:09.910 "data_offset": 2048, 00:10:09.910 "data_size": 63488 00:10:09.910 }, 00:10:09.910 
{ 00:10:09.910 "name": "BaseBdev3", 00:10:09.910 "uuid": "a0da1162-61f6-4078-b72b-d94e29a8c9d7", 00:10:09.910 "is_configured": true, 00:10:09.910 "data_offset": 2048, 00:10:09.910 "data_size": 63488 00:10:09.910 } 00:10:09.910 ] 00:10:09.910 }' 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.910 17:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.267 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.267 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.267 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.267 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.267 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.267 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.267 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.267 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:10.267 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.267 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.533 [2024-11-26 17:54:52.055495] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.534 "name": "Existed_Raid", 00:10:10.534 
"aliases": [ 00:10:10.534 "0f461950-77bc-4225-9344-5b7c055eff30" 00:10:10.534 ], 00:10:10.534 "product_name": "Raid Volume", 00:10:10.534 "block_size": 512, 00:10:10.534 "num_blocks": 63488, 00:10:10.534 "uuid": "0f461950-77bc-4225-9344-5b7c055eff30", 00:10:10.534 "assigned_rate_limits": { 00:10:10.534 "rw_ios_per_sec": 0, 00:10:10.534 "rw_mbytes_per_sec": 0, 00:10:10.534 "r_mbytes_per_sec": 0, 00:10:10.534 "w_mbytes_per_sec": 0 00:10:10.534 }, 00:10:10.534 "claimed": false, 00:10:10.534 "zoned": false, 00:10:10.534 "supported_io_types": { 00:10:10.534 "read": true, 00:10:10.534 "write": true, 00:10:10.534 "unmap": false, 00:10:10.534 "flush": false, 00:10:10.534 "reset": true, 00:10:10.534 "nvme_admin": false, 00:10:10.534 "nvme_io": false, 00:10:10.534 "nvme_io_md": false, 00:10:10.534 "write_zeroes": true, 00:10:10.534 "zcopy": false, 00:10:10.534 "get_zone_info": false, 00:10:10.534 "zone_management": false, 00:10:10.534 "zone_append": false, 00:10:10.534 "compare": false, 00:10:10.534 "compare_and_write": false, 00:10:10.534 "abort": false, 00:10:10.534 "seek_hole": false, 00:10:10.534 "seek_data": false, 00:10:10.534 "copy": false, 00:10:10.534 "nvme_iov_md": false 00:10:10.534 }, 00:10:10.534 "memory_domains": [ 00:10:10.534 { 00:10:10.534 "dma_device_id": "system", 00:10:10.534 "dma_device_type": 1 00:10:10.534 }, 00:10:10.534 { 00:10:10.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.534 "dma_device_type": 2 00:10:10.534 }, 00:10:10.534 { 00:10:10.534 "dma_device_id": "system", 00:10:10.534 "dma_device_type": 1 00:10:10.534 }, 00:10:10.534 { 00:10:10.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.534 "dma_device_type": 2 00:10:10.534 }, 00:10:10.534 { 00:10:10.534 "dma_device_id": "system", 00:10:10.534 "dma_device_type": 1 00:10:10.534 }, 00:10:10.534 { 00:10:10.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.534 "dma_device_type": 2 00:10:10.534 } 00:10:10.534 ], 00:10:10.534 "driver_specific": { 00:10:10.534 "raid": { 00:10:10.534 
"uuid": "0f461950-77bc-4225-9344-5b7c055eff30", 00:10:10.534 "strip_size_kb": 0, 00:10:10.534 "state": "online", 00:10:10.534 "raid_level": "raid1", 00:10:10.534 "superblock": true, 00:10:10.534 "num_base_bdevs": 3, 00:10:10.534 "num_base_bdevs_discovered": 3, 00:10:10.534 "num_base_bdevs_operational": 3, 00:10:10.534 "base_bdevs_list": [ 00:10:10.534 { 00:10:10.534 "name": "NewBaseBdev", 00:10:10.534 "uuid": "b27f741d-7881-4ec8-8bad-3557a025d511", 00:10:10.534 "is_configured": true, 00:10:10.534 "data_offset": 2048, 00:10:10.534 "data_size": 63488 00:10:10.534 }, 00:10:10.534 { 00:10:10.534 "name": "BaseBdev2", 00:10:10.534 "uuid": "54e6b646-1e00-40b2-b580-9e189a461dee", 00:10:10.534 "is_configured": true, 00:10:10.534 "data_offset": 2048, 00:10:10.534 "data_size": 63488 00:10:10.534 }, 00:10:10.534 { 00:10:10.534 "name": "BaseBdev3", 00:10:10.534 "uuid": "a0da1162-61f6-4078-b72b-d94e29a8c9d7", 00:10:10.534 "is_configured": true, 00:10:10.534 "data_offset": 2048, 00:10:10.534 "data_size": 63488 00:10:10.534 } 00:10:10.534 ] 00:10:10.534 } 00:10:10.534 } 00:10:10.534 }' 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:10.534 BaseBdev2 00:10:10.534 BaseBdev3' 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:10.534 17:54:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.534 17:54:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.534 [2024-11-26 17:54:52.330658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.534 [2024-11-26 17:54:52.330781] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.534 [2024-11-26 17:54:52.330895] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.534 [2024-11-26 17:54:52.331263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.534 [2024-11-26 17:54:52.331278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68247 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68247 ']' 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68247 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68247 00:10:10.534 killing process with pid 68247 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68247' 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68247 00:10:10.534 [2024-11-26 17:54:52.378883] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.534 17:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68247 00:10:11.111 [2024-11-26 17:54:52.742832] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.519 17:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:12.519 00:10:12.519 real 0m11.131s 00:10:12.519 user 0m17.523s 00:10:12.519 sys 0m1.995s 00:10:12.519 ************************************ 00:10:12.519 END TEST raid_state_function_test_sb 00:10:12.519 ************************************ 00:10:12.519 17:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.519 17:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.519 17:54:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:10:12.519 17:54:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:12.519 17:54:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.519 17:54:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.519 ************************************ 00:10:12.519 START TEST raid_superblock_test 00:10:12.519 ************************************ 00:10:12.519 17:54:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68873 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68873 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68873 ']' 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.520 17:54:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.520 [2024-11-26 17:54:54.169720] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:10:12.520 [2024-11-26 17:54:54.169974] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68873 ] 00:10:12.520 [2024-11-26 17:54:54.334498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.780 [2024-11-26 17:54:54.468764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.040 [2024-11-26 17:54:54.692009] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.040 [2024-11-26 17:54:54.692064] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:13.299 
17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.299 malloc1 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.299 [2024-11-26 17:54:55.137874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:13.299 [2024-11-26 17:54:55.137986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.299 [2024-11-26 17:54:55.138039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:13.299 [2024-11-26 17:54:55.138053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.299 [2024-11-26 17:54:55.140741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.299 [2024-11-26 17:54:55.140810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:13.299 pt1 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.299 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.559 malloc2 00:10:13.559 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.559 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:13.559 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.559 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.559 [2024-11-26 17:54:55.202095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:13.559 [2024-11-26 17:54:55.202312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.559 [2024-11-26 17:54:55.202373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:13.559 [2024-11-26 17:54:55.202428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.559 [2024-11-26 17:54:55.205207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.559 [2024-11-26 17:54:55.205269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:13.559 
pt2 00:10:13.559 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.559 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:13.559 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:13.559 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:13.559 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:13.559 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:13.559 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:13.559 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:13.559 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:13.559 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:13.559 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.559 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.559 malloc3 00:10:13.559 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.559 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.560 [2024-11-26 17:54:55.275013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:13.560 [2024-11-26 17:54:55.275204] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.560 [2024-11-26 17:54:55.275273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:13.560 [2024-11-26 17:54:55.275306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.560 [2024-11-26 17:54:55.277952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.560 [2024-11-26 17:54:55.278114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:13.560 pt3 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.560 [2024-11-26 17:54:55.287123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:13.560 [2024-11-26 17:54:55.289406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:13.560 [2024-11-26 17:54:55.289602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:13.560 [2024-11-26 17:54:55.289846] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:13.560 [2024-11-26 17:54:55.289913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:13.560 [2024-11-26 17:54:55.290286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:13.560 
[2024-11-26 17:54:55.290539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:13.560 [2024-11-26 17:54:55.290594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:13.560 [2024-11-26 17:54:55.290856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.560 "name": "raid_bdev1", 00:10:13.560 "uuid": "e1b2c406-3bff-487f-b4b1-a6b785a2cf3a", 00:10:13.560 "strip_size_kb": 0, 00:10:13.560 "state": "online", 00:10:13.560 "raid_level": "raid1", 00:10:13.560 "superblock": true, 00:10:13.560 "num_base_bdevs": 3, 00:10:13.560 "num_base_bdevs_discovered": 3, 00:10:13.560 "num_base_bdevs_operational": 3, 00:10:13.560 "base_bdevs_list": [ 00:10:13.560 { 00:10:13.560 "name": "pt1", 00:10:13.560 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.560 "is_configured": true, 00:10:13.560 "data_offset": 2048, 00:10:13.560 "data_size": 63488 00:10:13.560 }, 00:10:13.560 { 00:10:13.560 "name": "pt2", 00:10:13.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.560 "is_configured": true, 00:10:13.560 "data_offset": 2048, 00:10:13.560 "data_size": 63488 00:10:13.560 }, 00:10:13.560 { 00:10:13.560 "name": "pt3", 00:10:13.560 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.560 "is_configured": true, 00:10:13.560 "data_offset": 2048, 00:10:13.560 "data_size": 63488 00:10:13.560 } 00:10:13.560 ] 00:10:13.560 }' 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.560 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.130 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:14.130 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:14.130 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:14.130 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:14.130 17:54:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:14.130 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:14.130 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:14.130 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:14.130 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.130 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.130 [2024-11-26 17:54:55.762667] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.130 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.130 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:14.130 "name": "raid_bdev1", 00:10:14.130 "aliases": [ 00:10:14.130 "e1b2c406-3bff-487f-b4b1-a6b785a2cf3a" 00:10:14.130 ], 00:10:14.130 "product_name": "Raid Volume", 00:10:14.130 "block_size": 512, 00:10:14.130 "num_blocks": 63488, 00:10:14.130 "uuid": "e1b2c406-3bff-487f-b4b1-a6b785a2cf3a", 00:10:14.130 "assigned_rate_limits": { 00:10:14.130 "rw_ios_per_sec": 0, 00:10:14.130 "rw_mbytes_per_sec": 0, 00:10:14.130 "r_mbytes_per_sec": 0, 00:10:14.130 "w_mbytes_per_sec": 0 00:10:14.130 }, 00:10:14.130 "claimed": false, 00:10:14.130 "zoned": false, 00:10:14.130 "supported_io_types": { 00:10:14.130 "read": true, 00:10:14.130 "write": true, 00:10:14.130 "unmap": false, 00:10:14.130 "flush": false, 00:10:14.130 "reset": true, 00:10:14.130 "nvme_admin": false, 00:10:14.130 "nvme_io": false, 00:10:14.130 "nvme_io_md": false, 00:10:14.130 "write_zeroes": true, 00:10:14.130 "zcopy": false, 00:10:14.130 "get_zone_info": false, 00:10:14.130 "zone_management": false, 00:10:14.130 "zone_append": false, 00:10:14.130 "compare": false, 00:10:14.130 
"compare_and_write": false, 00:10:14.130 "abort": false, 00:10:14.130 "seek_hole": false, 00:10:14.130 "seek_data": false, 00:10:14.130 "copy": false, 00:10:14.130 "nvme_iov_md": false 00:10:14.130 }, 00:10:14.130 "memory_domains": [ 00:10:14.130 { 00:10:14.130 "dma_device_id": "system", 00:10:14.130 "dma_device_type": 1 00:10:14.130 }, 00:10:14.130 { 00:10:14.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.130 "dma_device_type": 2 00:10:14.130 }, 00:10:14.130 { 00:10:14.130 "dma_device_id": "system", 00:10:14.130 "dma_device_type": 1 00:10:14.130 }, 00:10:14.130 { 00:10:14.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.130 "dma_device_type": 2 00:10:14.130 }, 00:10:14.130 { 00:10:14.130 "dma_device_id": "system", 00:10:14.131 "dma_device_type": 1 00:10:14.131 }, 00:10:14.131 { 00:10:14.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.131 "dma_device_type": 2 00:10:14.131 } 00:10:14.131 ], 00:10:14.131 "driver_specific": { 00:10:14.131 "raid": { 00:10:14.131 "uuid": "e1b2c406-3bff-487f-b4b1-a6b785a2cf3a", 00:10:14.131 "strip_size_kb": 0, 00:10:14.131 "state": "online", 00:10:14.131 "raid_level": "raid1", 00:10:14.131 "superblock": true, 00:10:14.131 "num_base_bdevs": 3, 00:10:14.131 "num_base_bdevs_discovered": 3, 00:10:14.131 "num_base_bdevs_operational": 3, 00:10:14.131 "base_bdevs_list": [ 00:10:14.131 { 00:10:14.131 "name": "pt1", 00:10:14.131 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.131 "is_configured": true, 00:10:14.131 "data_offset": 2048, 00:10:14.131 "data_size": 63488 00:10:14.131 }, 00:10:14.131 { 00:10:14.131 "name": "pt2", 00:10:14.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.131 "is_configured": true, 00:10:14.131 "data_offset": 2048, 00:10:14.131 "data_size": 63488 00:10:14.131 }, 00:10:14.131 { 00:10:14.131 "name": "pt3", 00:10:14.131 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.131 "is_configured": true, 00:10:14.131 "data_offset": 2048, 00:10:14.131 "data_size": 63488 00:10:14.131 } 
00:10:14.131 ] 00:10:14.131 } 00:10:14.131 } 00:10:14.131 }' 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:14.131 pt2 00:10:14.131 pt3' 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.131 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.391 17:54:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.391 [2024-11-26 17:54:56.014173] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e1b2c406-3bff-487f-b4b1-a6b785a2cf3a 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e1b2c406-3bff-487f-b4b1-a6b785a2cf3a ']' 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.391 [2024-11-26 17:54:56.045775] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:14.391 [2024-11-26 17:54:56.045824] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.391 [2024-11-26 17:54:56.045933] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.391 [2024-11-26 17:54:56.046033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.391 [2024-11-26 17:54:56.046046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:14.391 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:14.392 17:54:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.392 [2024-11-26 17:54:56.193580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:14.392 [2024-11-26 17:54:56.195838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:14.392 [2024-11-26 17:54:56.196007] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:14.392 [2024-11-26 17:54:56.196097] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:14.392 [2024-11-26 17:54:56.196194] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:14.392 [2024-11-26 17:54:56.196221] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:14.392 [2024-11-26 17:54:56.196242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:14.392 [2024-11-26 17:54:56.196254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:14.392 request: 00:10:14.392 { 00:10:14.392 "name": "raid_bdev1", 00:10:14.392 "raid_level": "raid1", 00:10:14.392 "base_bdevs": [ 00:10:14.392 "malloc1", 00:10:14.392 "malloc2", 00:10:14.392 "malloc3" 00:10:14.392 ], 00:10:14.392 "superblock": false, 00:10:14.392 "method": "bdev_raid_create", 00:10:14.392 "req_id": 1 00:10:14.392 } 00:10:14.392 Got JSON-RPC error response 00:10:14.392 response: 00:10:14.392 { 00:10:14.392 "code": -17, 00:10:14.392 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:14.392 } 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.392 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.653 [2024-11-26 17:54:56.253431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:14.653 [2024-11-26 17:54:56.253597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.653 [2024-11-26 17:54:56.253654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:14.653 [2024-11-26 17:54:56.253697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.653 [2024-11-26 17:54:56.256429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.653 [2024-11-26 17:54:56.256568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:14.653 [2024-11-26 17:54:56.256740] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:14.653 [2024-11-26 17:54:56.256866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:14.653 pt1 00:10:14.653 
17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.653 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:14.653 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.653 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.653 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.653 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.653 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.653 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.653 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.653 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.653 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.653 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.653 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.653 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.653 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.653 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.653 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.653 "name": "raid_bdev1", 00:10:14.653 "uuid": "e1b2c406-3bff-487f-b4b1-a6b785a2cf3a", 00:10:14.653 "strip_size_kb": 0, 00:10:14.653 
"state": "configuring", 00:10:14.653 "raid_level": "raid1", 00:10:14.653 "superblock": true, 00:10:14.653 "num_base_bdevs": 3, 00:10:14.653 "num_base_bdevs_discovered": 1, 00:10:14.653 "num_base_bdevs_operational": 3, 00:10:14.653 "base_bdevs_list": [ 00:10:14.653 { 00:10:14.653 "name": "pt1", 00:10:14.653 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.653 "is_configured": true, 00:10:14.653 "data_offset": 2048, 00:10:14.653 "data_size": 63488 00:10:14.653 }, 00:10:14.653 { 00:10:14.653 "name": null, 00:10:14.653 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.653 "is_configured": false, 00:10:14.653 "data_offset": 2048, 00:10:14.653 "data_size": 63488 00:10:14.653 }, 00:10:14.653 { 00:10:14.653 "name": null, 00:10:14.653 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.653 "is_configured": false, 00:10:14.653 "data_offset": 2048, 00:10:14.653 "data_size": 63488 00:10:14.653 } 00:10:14.653 ] 00:10:14.653 }' 00:10:14.653 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.653 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.913 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:14.913 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:14.913 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.913 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.913 [2024-11-26 17:54:56.757053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:14.913 [2024-11-26 17:54:56.757147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.913 [2024-11-26 17:54:56.757173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:14.913 
[2024-11-26 17:54:56.757183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.913 [2024-11-26 17:54:56.757727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.913 [2024-11-26 17:54:56.757759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:14.913 [2024-11-26 17:54:56.757868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:14.913 [2024-11-26 17:54:56.757894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:14.913 pt2 00:10:14.913 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.913 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:14.913 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.913 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.913 [2024-11-26 17:54:56.769133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:14.913 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.913 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:15.174 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.174 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.174 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.174 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.174 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.174 17:54:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.174 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.174 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.174 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.174 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.174 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.174 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.174 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.174 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.174 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.174 "name": "raid_bdev1", 00:10:15.174 "uuid": "e1b2c406-3bff-487f-b4b1-a6b785a2cf3a", 00:10:15.174 "strip_size_kb": 0, 00:10:15.174 "state": "configuring", 00:10:15.174 "raid_level": "raid1", 00:10:15.174 "superblock": true, 00:10:15.174 "num_base_bdevs": 3, 00:10:15.174 "num_base_bdevs_discovered": 1, 00:10:15.174 "num_base_bdevs_operational": 3, 00:10:15.174 "base_bdevs_list": [ 00:10:15.174 { 00:10:15.174 "name": "pt1", 00:10:15.174 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.174 "is_configured": true, 00:10:15.174 "data_offset": 2048, 00:10:15.174 "data_size": 63488 00:10:15.174 }, 00:10:15.174 { 00:10:15.174 "name": null, 00:10:15.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.174 "is_configured": false, 00:10:15.174 "data_offset": 0, 00:10:15.174 "data_size": 63488 00:10:15.174 }, 00:10:15.174 { 00:10:15.174 "name": null, 00:10:15.174 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.174 "is_configured": false, 00:10:15.174 
"data_offset": 2048, 00:10:15.174 "data_size": 63488 00:10:15.174 } 00:10:15.174 ] 00:10:15.174 }' 00:10:15.174 17:54:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.174 17:54:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:15.435 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:15.435 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:15.435 17:54:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.435 17:54:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 [2024-11-26 17:54:57.284418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:15.435 [2024-11-26 17:54:57.284529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.435 [2024-11-26 17:54:57.284555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:15.435 [2024-11-26 17:54:57.284568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.435 [2024-11-26 17:54:57.285172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.435 [2024-11-26 17:54:57.285212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:15.435 [2024-11-26 17:54:57.285318] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:15.435 [2024-11-26 17:54:57.285361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:15.435 pt2 00:10:15.435 17:54:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.435 17:54:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:15.435 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:15.435 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:15.435 17:54:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.435 17:54:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.696 [2024-11-26 17:54:57.296429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:15.696 [2024-11-26 17:54:57.296536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.696 [2024-11-26 17:54:57.296557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:15.696 [2024-11-26 17:54:57.296569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.696 [2024-11-26 17:54:57.297133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.696 [2024-11-26 17:54:57.297246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:15.696 [2024-11-26 17:54:57.297358] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:15.696 [2024-11-26 17:54:57.297389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:15.696 [2024-11-26 17:54:57.297550] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:15.696 [2024-11-26 17:54:57.297566] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:15.696 [2024-11-26 17:54:57.297857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:15.696 [2024-11-26 17:54:57.298053] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:15.696 [2024-11-26 17:54:57.298064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:15.696 [2024-11-26 17:54:57.298242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.696 pt3 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.696 17:54:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.696 "name": "raid_bdev1", 00:10:15.696 "uuid": "e1b2c406-3bff-487f-b4b1-a6b785a2cf3a", 00:10:15.696 "strip_size_kb": 0, 00:10:15.696 "state": "online", 00:10:15.696 "raid_level": "raid1", 00:10:15.696 "superblock": true, 00:10:15.696 "num_base_bdevs": 3, 00:10:15.696 "num_base_bdevs_discovered": 3, 00:10:15.696 "num_base_bdevs_operational": 3, 00:10:15.696 "base_bdevs_list": [ 00:10:15.696 { 00:10:15.696 "name": "pt1", 00:10:15.696 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.696 "is_configured": true, 00:10:15.696 "data_offset": 2048, 00:10:15.696 "data_size": 63488 00:10:15.696 }, 00:10:15.696 { 00:10:15.696 "name": "pt2", 00:10:15.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.696 "is_configured": true, 00:10:15.696 "data_offset": 2048, 00:10:15.696 "data_size": 63488 00:10:15.696 }, 00:10:15.696 { 00:10:15.696 "name": "pt3", 00:10:15.696 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.696 "is_configured": true, 00:10:15.696 "data_offset": 2048, 00:10:15.696 "data_size": 63488 00:10:15.696 } 00:10:15.696 ] 00:10:15.696 }' 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.696 17:54:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.956 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:15.956 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:15.956 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:15.956 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.956 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.956 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.956 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:15.956 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.956 17:54:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.956 17:54:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.956 [2024-11-26 17:54:57.787999] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.956 17:54:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.215 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.215 "name": "raid_bdev1", 00:10:16.215 "aliases": [ 00:10:16.215 "e1b2c406-3bff-487f-b4b1-a6b785a2cf3a" 00:10:16.215 ], 00:10:16.215 "product_name": "Raid Volume", 00:10:16.215 "block_size": 512, 00:10:16.215 "num_blocks": 63488, 00:10:16.215 "uuid": "e1b2c406-3bff-487f-b4b1-a6b785a2cf3a", 00:10:16.215 "assigned_rate_limits": { 00:10:16.215 "rw_ios_per_sec": 0, 00:10:16.215 "rw_mbytes_per_sec": 0, 00:10:16.215 "r_mbytes_per_sec": 0, 00:10:16.215 "w_mbytes_per_sec": 0 00:10:16.215 }, 00:10:16.215 "claimed": false, 00:10:16.215 "zoned": false, 00:10:16.216 "supported_io_types": { 00:10:16.216 "read": true, 00:10:16.216 "write": true, 00:10:16.216 "unmap": false, 00:10:16.216 "flush": false, 00:10:16.216 "reset": true, 00:10:16.216 "nvme_admin": false, 00:10:16.216 "nvme_io": false, 00:10:16.216 "nvme_io_md": false, 00:10:16.216 "write_zeroes": true, 00:10:16.216 "zcopy": false, 00:10:16.216 "get_zone_info": 
false, 00:10:16.216 "zone_management": false, 00:10:16.216 "zone_append": false, 00:10:16.216 "compare": false, 00:10:16.216 "compare_and_write": false, 00:10:16.216 "abort": false, 00:10:16.216 "seek_hole": false, 00:10:16.216 "seek_data": false, 00:10:16.216 "copy": false, 00:10:16.216 "nvme_iov_md": false 00:10:16.216 }, 00:10:16.216 "memory_domains": [ 00:10:16.216 { 00:10:16.216 "dma_device_id": "system", 00:10:16.216 "dma_device_type": 1 00:10:16.216 }, 00:10:16.216 { 00:10:16.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.216 "dma_device_type": 2 00:10:16.216 }, 00:10:16.216 { 00:10:16.216 "dma_device_id": "system", 00:10:16.216 "dma_device_type": 1 00:10:16.216 }, 00:10:16.216 { 00:10:16.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.216 "dma_device_type": 2 00:10:16.216 }, 00:10:16.216 { 00:10:16.216 "dma_device_id": "system", 00:10:16.216 "dma_device_type": 1 00:10:16.216 }, 00:10:16.216 { 00:10:16.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.216 "dma_device_type": 2 00:10:16.216 } 00:10:16.216 ], 00:10:16.216 "driver_specific": { 00:10:16.216 "raid": { 00:10:16.216 "uuid": "e1b2c406-3bff-487f-b4b1-a6b785a2cf3a", 00:10:16.216 "strip_size_kb": 0, 00:10:16.216 "state": "online", 00:10:16.216 "raid_level": "raid1", 00:10:16.216 "superblock": true, 00:10:16.216 "num_base_bdevs": 3, 00:10:16.216 "num_base_bdevs_discovered": 3, 00:10:16.216 "num_base_bdevs_operational": 3, 00:10:16.216 "base_bdevs_list": [ 00:10:16.216 { 00:10:16.216 "name": "pt1", 00:10:16.216 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.216 "is_configured": true, 00:10:16.216 "data_offset": 2048, 00:10:16.216 "data_size": 63488 00:10:16.216 }, 00:10:16.216 { 00:10:16.216 "name": "pt2", 00:10:16.216 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.216 "is_configured": true, 00:10:16.216 "data_offset": 2048, 00:10:16.216 "data_size": 63488 00:10:16.216 }, 00:10:16.216 { 00:10:16.216 "name": "pt3", 00:10:16.216 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:16.216 "is_configured": true, 00:10:16.216 "data_offset": 2048, 00:10:16.216 "data_size": 63488 00:10:16.216 } 00:10:16.216 ] 00:10:16.216 } 00:10:16.216 } 00:10:16.216 }' 00:10:16.216 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.216 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:16.216 pt2 00:10:16.216 pt3' 00:10:16.216 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.216 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.216 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.216 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:16.216 17:54:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.216 17:54:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.216 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.216 17:54:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.216 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.216 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.216 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.216 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:16.216 17:54:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.216 17:54:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.216 17:54:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.216 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.216 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.216 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.216 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.216 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:16.216 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.216 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.216 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.216 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.477 [2024-11-26 17:54:58.095512] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e1b2c406-3bff-487f-b4b1-a6b785a2cf3a '!=' e1b2c406-3bff-487f-b4b1-a6b785a2cf3a ']' 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.477 [2024-11-26 17:54:58.139196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.477 17:54:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.477 "name": "raid_bdev1", 00:10:16.477 "uuid": "e1b2c406-3bff-487f-b4b1-a6b785a2cf3a", 00:10:16.477 "strip_size_kb": 0, 00:10:16.477 "state": "online", 00:10:16.477 "raid_level": "raid1", 00:10:16.477 "superblock": true, 00:10:16.477 "num_base_bdevs": 3, 00:10:16.477 "num_base_bdevs_discovered": 2, 00:10:16.477 "num_base_bdevs_operational": 2, 00:10:16.477 "base_bdevs_list": [ 00:10:16.477 { 00:10:16.477 "name": null, 00:10:16.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.477 "is_configured": false, 00:10:16.477 "data_offset": 0, 00:10:16.477 "data_size": 63488 00:10:16.477 }, 00:10:16.477 { 00:10:16.477 "name": "pt2", 00:10:16.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.477 "is_configured": true, 00:10:16.477 "data_offset": 2048, 00:10:16.477 "data_size": 63488 00:10:16.477 }, 00:10:16.477 { 00:10:16.477 "name": "pt3", 00:10:16.477 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.477 "is_configured": true, 00:10:16.477 "data_offset": 2048, 00:10:16.477 "data_size": 63488 00:10:16.477 } 
00:10:16.477 ] 00:10:16.477 }' 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.477 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.737 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:16.737 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.738 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.738 [2024-11-26 17:54:58.574358] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:16.738 [2024-11-26 17:54:58.574485] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.738 [2024-11-26 17:54:58.574614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.738 [2024-11-26 17:54:58.574703] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.738 [2024-11-26 17:54:58.574762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:16.738 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.738 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.738 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.738 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.738 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:16.738 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.998 17:54:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.998 [2024-11-26 17:54:58.658217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:16.998 [2024-11-26 17:54:58.658410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.998 [2024-11-26 17:54:58.658437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:16.998 [2024-11-26 17:54:58.658449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.998 [2024-11-26 17:54:58.661126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.998 [2024-11-26 17:54:58.661185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:16.998 [2024-11-26 17:54:58.661299] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:16.998 [2024-11-26 17:54:58.661363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:16.998 pt2 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.998 17:54:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.998 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.998 "name": "raid_bdev1", 00:10:16.998 "uuid": "e1b2c406-3bff-487f-b4b1-a6b785a2cf3a", 00:10:16.998 "strip_size_kb": 0, 00:10:16.998 "state": "configuring", 00:10:16.998 "raid_level": "raid1", 00:10:16.998 "superblock": true, 00:10:16.999 "num_base_bdevs": 3, 00:10:16.999 "num_base_bdevs_discovered": 1, 00:10:16.999 "num_base_bdevs_operational": 2, 00:10:16.999 "base_bdevs_list": [ 00:10:16.999 { 00:10:16.999 "name": null, 00:10:16.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.999 "is_configured": false, 00:10:16.999 "data_offset": 2048, 00:10:16.999 "data_size": 63488 00:10:16.999 }, 00:10:16.999 { 00:10:16.999 "name": "pt2", 00:10:16.999 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.999 "is_configured": true, 00:10:16.999 "data_offset": 2048, 00:10:16.999 "data_size": 63488 00:10:16.999 }, 00:10:16.999 { 00:10:16.999 "name": null, 00:10:16.999 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.999 "is_configured": false, 00:10:16.999 "data_offset": 2048, 00:10:16.999 "data_size": 63488 00:10:16.999 } 
00:10:16.999 ] 00:10:16.999 }' 00:10:16.999 17:54:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.999 17:54:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.290 [2024-11-26 17:54:59.129466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:17.290 [2024-11-26 17:54:59.129677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.290 [2024-11-26 17:54:59.129746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:17.290 [2024-11-26 17:54:59.129797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.290 [2024-11-26 17:54:59.130452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.290 [2024-11-26 17:54:59.130540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:17.290 [2024-11-26 17:54:59.130698] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:17.290 [2024-11-26 17:54:59.130764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:17.290 [2024-11-26 17:54:59.130952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:17.290 [2024-11-26 17:54:59.130998] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:17.290 [2024-11-26 17:54:59.131358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:17.290 [2024-11-26 17:54:59.131588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:17.290 [2024-11-26 17:54:59.131638] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:17.290 [2024-11-26 17:54:59.131841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.290 pt3 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.290 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.553 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.553 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.553 "name": "raid_bdev1", 00:10:17.553 "uuid": "e1b2c406-3bff-487f-b4b1-a6b785a2cf3a", 00:10:17.553 "strip_size_kb": 0, 00:10:17.553 "state": "online", 00:10:17.553 "raid_level": "raid1", 00:10:17.553 "superblock": true, 00:10:17.553 "num_base_bdevs": 3, 00:10:17.553 "num_base_bdevs_discovered": 2, 00:10:17.553 "num_base_bdevs_operational": 2, 00:10:17.553 "base_bdevs_list": [ 00:10:17.553 { 00:10:17.553 "name": null, 00:10:17.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.553 "is_configured": false, 00:10:17.553 "data_offset": 2048, 00:10:17.553 "data_size": 63488 00:10:17.553 }, 00:10:17.553 { 00:10:17.553 "name": "pt2", 00:10:17.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.553 "is_configured": true, 00:10:17.553 "data_offset": 2048, 00:10:17.553 "data_size": 63488 00:10:17.553 }, 00:10:17.553 { 00:10:17.553 "name": "pt3", 00:10:17.553 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.553 "is_configured": true, 00:10:17.553 "data_offset": 2048, 00:10:17.553 "data_size": 63488 00:10:17.553 } 00:10:17.553 ] 00:10:17.553 }' 00:10:17.553 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.553 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.813 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:17.813 17:54:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.813 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.813 [2024-11-26 17:54:59.605009] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.813 [2024-11-26 17:54:59.605076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.813 [2024-11-26 17:54:59.605186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.813 [2024-11-26 17:54:59.605263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.813 [2024-11-26 17:54:59.605276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:17.813 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.813 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.813 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.813 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:17.813 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.813 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.813 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:17.813 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:17.813 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:17.813 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:17.813 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:17.813 17:54:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.813 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.074 [2024-11-26 17:54:59.681108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:18.074 [2024-11-26 17:54:59.681211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.074 [2024-11-26 17:54:59.681236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:18.074 [2024-11-26 17:54:59.681246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.074 [2024-11-26 17:54:59.683926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.074 [2024-11-26 17:54:59.683993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:18.074 [2024-11-26 17:54:59.684131] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:18.074 [2024-11-26 17:54:59.684187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:18.074 [2024-11-26 17:54:59.684363] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:18.074 [2024-11-26 17:54:59.684376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.074 [2024-11-26 17:54:59.684396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:10:18.074 [2024-11-26 17:54:59.684471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:18.074 pt1 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.074 "name": "raid_bdev1", 00:10:18.074 "uuid": "e1b2c406-3bff-487f-b4b1-a6b785a2cf3a", 00:10:18.074 "strip_size_kb": 0, 00:10:18.074 "state": "configuring", 00:10:18.074 "raid_level": "raid1", 00:10:18.074 "superblock": true, 00:10:18.074 "num_base_bdevs": 3, 00:10:18.074 "num_base_bdevs_discovered": 1, 00:10:18.074 "num_base_bdevs_operational": 2, 00:10:18.074 "base_bdevs_list": [ 00:10:18.074 { 00:10:18.074 "name": null, 00:10:18.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.074 "is_configured": false, 00:10:18.074 "data_offset": 2048, 00:10:18.074 "data_size": 63488 00:10:18.074 }, 00:10:18.074 { 00:10:18.074 "name": "pt2", 00:10:18.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.074 "is_configured": true, 00:10:18.074 "data_offset": 2048, 00:10:18.074 "data_size": 63488 00:10:18.074 }, 00:10:18.074 { 00:10:18.074 "name": null, 00:10:18.074 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.074 "is_configured": false, 00:10:18.074 "data_offset": 2048, 00:10:18.074 "data_size": 63488 00:10:18.074 } 00:10:18.074 ] 00:10:18.074 }' 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.074 17:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.333 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:18.333 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:18.333 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.333 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.333 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:18.591 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:18.591 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:18.591 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.591 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.591 [2024-11-26 17:55:00.205064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:18.591 [2024-11-26 17:55:00.205266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.591 [2024-11-26 17:55:00.205318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:18.591 [2024-11-26 17:55:00.205362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.591 [2024-11-26 17:55:00.205938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.592 [2024-11-26 17:55:00.206030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:18.592 [2024-11-26 17:55:00.206191] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:18.592 [2024-11-26 17:55:00.206251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:18.592 [2024-11-26 17:55:00.206436] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:18.592 [2024-11-26 17:55:00.206478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:18.592 [2024-11-26 17:55:00.206790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:18.592 [2024-11-26 17:55:00.207016] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:18.592 [2024-11-26 17:55:00.207092] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:18.592 [2024-11-26 17:55:00.207304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.592 pt3 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.592 "name": "raid_bdev1", 00:10:18.592 "uuid": "e1b2c406-3bff-487f-b4b1-a6b785a2cf3a", 00:10:18.592 "strip_size_kb": 0, 00:10:18.592 "state": "online", 00:10:18.592 "raid_level": "raid1", 00:10:18.592 "superblock": true, 00:10:18.592 "num_base_bdevs": 3, 00:10:18.592 "num_base_bdevs_discovered": 2, 00:10:18.592 "num_base_bdevs_operational": 2, 00:10:18.592 "base_bdevs_list": [ 00:10:18.592 { 00:10:18.592 "name": null, 00:10:18.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.592 "is_configured": false, 00:10:18.592 "data_offset": 2048, 00:10:18.592 "data_size": 63488 00:10:18.592 }, 00:10:18.592 { 00:10:18.592 "name": "pt2", 00:10:18.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.592 "is_configured": true, 00:10:18.592 "data_offset": 2048, 00:10:18.592 "data_size": 63488 00:10:18.592 }, 00:10:18.592 { 00:10:18.592 "name": "pt3", 00:10:18.592 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.592 "is_configured": true, 00:10:18.592 "data_offset": 2048, 00:10:18.592 "data_size": 63488 00:10:18.592 } 00:10:18.592 ] 00:10:18.592 }' 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.592 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.851 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:18.851 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:18.851 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.851 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.851 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.110 17:55:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:19.110 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:19.110 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:19.110 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.110 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.110 [2024-11-26 17:55:00.732745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.110 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.110 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e1b2c406-3bff-487f-b4b1-a6b785a2cf3a '!=' e1b2c406-3bff-487f-b4b1-a6b785a2cf3a ']' 00:10:19.110 17:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68873 00:10:19.110 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68873 ']' 00:10:19.110 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68873 00:10:19.110 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:19.110 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.110 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68873 00:10:19.110 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.110 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.110 killing process with pid 68873 00:10:19.110 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68873' 00:10:19.110 17:55:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68873 00:10:19.110 [2024-11-26 17:55:00.808572] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.110 [2024-11-26 17:55:00.808698] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.110 17:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68873 00:10:19.110 [2024-11-26 17:55:00.808771] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.110 [2024-11-26 17:55:00.808785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:19.369 [2024-11-26 17:55:01.165489] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:20.748 17:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:20.748 00:10:20.748 real 0m8.390s 00:10:20.748 user 0m13.106s 00:10:20.748 sys 0m1.443s 00:10:20.748 17:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.748 17:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.748 ************************************ 00:10:20.748 END TEST raid_superblock_test 00:10:20.748 ************************************ 00:10:20.748 17:55:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:20.748 17:55:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:20.748 17:55:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.748 17:55:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:20.748 ************************************ 00:10:20.748 START TEST raid_read_error_test 00:10:20.748 ************************************ 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:20.748 17:55:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:20.748 17:55:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yNhXzCBYxI 00:10:20.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69324 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69324 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69324 ']' 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.748 17:55:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.007 [2024-11-26 17:55:02.646832] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:10:21.007 [2024-11-26 17:55:02.647082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69324 ] 00:10:21.007 [2024-11-26 17:55:02.825562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.267 [2024-11-26 17:55:02.963874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.527 [2024-11-26 17:55:03.195790] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.527 [2024-11-26 17:55:03.195845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.786 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.787 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:21.787 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.787 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:21.787 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.787 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.787 BaseBdev1_malloc 00:10:21.787 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.787 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:21.787 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.787 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.787 true 00:10:21.787 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:21.787 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:21.787 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.787 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.787 [2024-11-26 17:55:03.622059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:21.787 [2024-11-26 17:55:03.622250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.787 [2024-11-26 17:55:03.622305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:21.787 [2024-11-26 17:55:03.622345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.787 [2024-11-26 17:55:03.625065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.787 [2024-11-26 17:55:03.625181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:21.787 BaseBdev1 00:10:21.787 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.787 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.787 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:21.787 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.787 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.048 BaseBdev2_malloc 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.048 true 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.048 [2024-11-26 17:55:03.692403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:22.048 [2024-11-26 17:55:03.692570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.048 [2024-11-26 17:55:03.692613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:22.048 [2024-11-26 17:55:03.692627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.048 [2024-11-26 17:55:03.695273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.048 [2024-11-26 17:55:03.695329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:22.048 BaseBdev2 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.048 BaseBdev3_malloc 00:10:22.048 17:55:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.048 true 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.048 [2024-11-26 17:55:03.771952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:22.048 [2024-11-26 17:55:03.772161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.048 [2024-11-26 17:55:03.772246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:22.048 [2024-11-26 17:55:03.772293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.048 [2024-11-26 17:55:03.775014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.048 [2024-11-26 17:55:03.775092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:22.048 BaseBdev3 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.048 [2024-11-26 17:55:03.784083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:22.048 [2024-11-26 17:55:03.786461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.048 [2024-11-26 17:55:03.786650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:22.048 [2024-11-26 17:55:03.786975] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:22.048 [2024-11-26 17:55:03.787044] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:22.048 [2024-11-26 17:55:03.787396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:22.048 [2024-11-26 17:55:03.787662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:22.048 [2024-11-26 17:55:03.787714] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:22.048 [2024-11-26 17:55:03.788043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.048 17:55:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.048 "name": "raid_bdev1", 00:10:22.048 "uuid": "6bd0584d-4866-468f-b480-c47cb57255fb", 00:10:22.048 "strip_size_kb": 0, 00:10:22.048 "state": "online", 00:10:22.048 "raid_level": "raid1", 00:10:22.048 "superblock": true, 00:10:22.048 "num_base_bdevs": 3, 00:10:22.048 "num_base_bdevs_discovered": 3, 00:10:22.048 "num_base_bdevs_operational": 3, 00:10:22.048 "base_bdevs_list": [ 00:10:22.048 { 00:10:22.048 "name": "BaseBdev1", 00:10:22.048 "uuid": "1c7cfdac-e340-54a2-bd19-c453ea0e1e3c", 00:10:22.048 "is_configured": true, 00:10:22.048 "data_offset": 2048, 00:10:22.048 "data_size": 63488 00:10:22.048 }, 00:10:22.048 { 00:10:22.048 "name": "BaseBdev2", 00:10:22.048 "uuid": "c04cf111-854b-58ce-b101-4a1c878c3d28", 00:10:22.048 "is_configured": true, 00:10:22.048 "data_offset": 2048, 00:10:22.048 "data_size": 63488 
00:10:22.048 }, 00:10:22.048 { 00:10:22.048 "name": "BaseBdev3", 00:10:22.048 "uuid": "2873c68b-6beb-503a-bf75-c97f9365fda4", 00:10:22.048 "is_configured": true, 00:10:22.048 "data_offset": 2048, 00:10:22.048 "data_size": 63488 00:10:22.048 } 00:10:22.048 ] 00:10:22.048 }' 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.048 17:55:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.618 17:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:22.618 17:55:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:22.618 [2024-11-26 17:55:04.408673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:23.559 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:23.559 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.559 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.559 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.559 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:23.559 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:23.559 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:23.559 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:23.560 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:23.560 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.560 
17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.560 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.560 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.560 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.560 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.560 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.560 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.560 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.560 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.560 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.560 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.560 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.560 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.560 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.560 "name": "raid_bdev1", 00:10:23.560 "uuid": "6bd0584d-4866-468f-b480-c47cb57255fb", 00:10:23.560 "strip_size_kb": 0, 00:10:23.560 "state": "online", 00:10:23.560 "raid_level": "raid1", 00:10:23.560 "superblock": true, 00:10:23.560 "num_base_bdevs": 3, 00:10:23.560 "num_base_bdevs_discovered": 3, 00:10:23.560 "num_base_bdevs_operational": 3, 00:10:23.560 "base_bdevs_list": [ 00:10:23.560 { 00:10:23.560 "name": "BaseBdev1", 00:10:23.560 "uuid": "1c7cfdac-e340-54a2-bd19-c453ea0e1e3c", 
00:10:23.560 "is_configured": true, 00:10:23.560 "data_offset": 2048, 00:10:23.560 "data_size": 63488 00:10:23.560 }, 00:10:23.560 { 00:10:23.560 "name": "BaseBdev2", 00:10:23.560 "uuid": "c04cf111-854b-58ce-b101-4a1c878c3d28", 00:10:23.560 "is_configured": true, 00:10:23.560 "data_offset": 2048, 00:10:23.560 "data_size": 63488 00:10:23.560 }, 00:10:23.560 { 00:10:23.560 "name": "BaseBdev3", 00:10:23.560 "uuid": "2873c68b-6beb-503a-bf75-c97f9365fda4", 00:10:23.560 "is_configured": true, 00:10:23.560 "data_offset": 2048, 00:10:23.560 "data_size": 63488 00:10:23.560 } 00:10:23.560 ] 00:10:23.560 }' 00:10:23.560 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.560 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.156 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:24.156 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.156 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.156 [2024-11-26 17:55:05.801749] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.156 [2024-11-26 17:55:05.801901] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.157 [2024-11-26 17:55:05.805429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.157 [2024-11-26 17:55:05.805591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.157 [2024-11-26 17:55:05.805815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.157 { 00:10:24.157 "results": [ 00:10:24.157 { 00:10:24.157 "job": "raid_bdev1", 00:10:24.157 "core_mask": "0x1", 00:10:24.157 "workload": "randrw", 00:10:24.157 "percentage": 50, 00:10:24.157 "status": "finished", 00:10:24.157 
"queue_depth": 1, 00:10:24.157 "io_size": 131072, 00:10:24.157 "runtime": 1.393841, 00:10:24.157 "iops": 11450.373464405195, 00:10:24.157 "mibps": 1431.2966830506493, 00:10:24.157 "io_failed": 0, 00:10:24.157 "io_timeout": 0, 00:10:24.157 "avg_latency_us": 84.18983878911251, 00:10:24.157 "min_latency_us": 25.4882096069869, 00:10:24.157 "max_latency_us": 1695.6366812227075 00:10:24.157 } 00:10:24.157 ], 00:10:24.157 "core_count": 1 00:10:24.157 } 00:10:24.157 [2024-11-26 17:55:05.805879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:24.157 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.157 17:55:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69324 00:10:24.157 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69324 ']' 00:10:24.157 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69324 00:10:24.157 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:24.157 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.157 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69324 00:10:24.157 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.157 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:24.157 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69324' 00:10:24.157 killing process with pid 69324 00:10:24.157 17:55:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69324 00:10:24.157 [2024-11-26 17:55:05.844289] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:24.157 17:55:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69324 00:10:24.427 [2024-11-26 17:55:06.116258] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.816 17:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yNhXzCBYxI 00:10:25.816 17:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:25.816 17:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:25.816 17:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:25.816 17:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:25.816 17:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.816 17:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:25.816 17:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:25.816 00:10:25.816 real 0m4.950s 00:10:25.816 user 0m5.942s 00:10:25.816 sys 0m0.607s 00:10:25.816 17:55:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.816 17:55:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.816 ************************************ 00:10:25.816 END TEST raid_read_error_test 00:10:25.816 ************************************ 00:10:25.816 17:55:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:25.816 17:55:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:25.816 17:55:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.816 17:55:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.816 ************************************ 00:10:25.816 START TEST raid_write_error_test 00:10:25.816 ************************************ 00:10:25.816 17:55:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JRvEXMHoZq 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69470 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69470 00:10:25.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69470 ']' 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.816 17:55:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.816 [2024-11-26 17:55:07.672971] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:10:25.816 [2024-11-26 17:55:07.673154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69470 ] 00:10:26.076 [2024-11-26 17:55:07.842406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.334 [2024-11-26 17:55:07.972135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.334 [2024-11-26 17:55:08.192315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.334 [2024-11-26 17:55:08.192373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.902 BaseBdev1_malloc 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.902 true 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.902 [2024-11-26 17:55:08.662708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:26.902 [2024-11-26 17:55:08.662887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.902 [2024-11-26 17:55:08.662970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:26.902 [2024-11-26 17:55:08.663014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.902 [2024-11-26 17:55:08.665619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.902 [2024-11-26 17:55:08.665746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:26.902 BaseBdev1 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:26.902 BaseBdev2_malloc 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.902 true 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.902 [2024-11-26 17:55:08.733048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:26.902 [2024-11-26 17:55:08.733242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.902 [2024-11-26 17:55:08.733351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:26.902 [2024-11-26 17:55:08.733398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.902 [2024-11-26 17:55:08.736002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.902 [2024-11-26 17:55:08.736144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:26.902 BaseBdev2 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.902 17:55:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.902 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.162 BaseBdev3_malloc 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.162 true 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.162 [2024-11-26 17:55:08.824505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:27.162 [2024-11-26 17:55:08.824662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.162 [2024-11-26 17:55:08.824726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:27.162 [2024-11-26 17:55:08.824774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.162 [2024-11-26 17:55:08.827502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.162 [2024-11-26 17:55:08.827629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:27.162 BaseBdev3 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.162 [2024-11-26 17:55:08.836601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.162 [2024-11-26 17:55:08.838938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.162 [2024-11-26 17:55:08.839123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.162 [2024-11-26 17:55:08.839422] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:27.162 [2024-11-26 17:55:08.839478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:27.162 [2024-11-26 17:55:08.839826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:27.162 [2024-11-26 17:55:08.840084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:27.162 [2024-11-26 17:55:08.840135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:27.162 [2024-11-26 17:55:08.840461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.162 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.163 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.163 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.163 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.163 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.163 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.163 "name": "raid_bdev1", 00:10:27.163 "uuid": "f6de2f4a-69ca-40bb-991b-365dcc9399ac", 00:10:27.163 "strip_size_kb": 0, 00:10:27.163 "state": "online", 00:10:27.163 "raid_level": "raid1", 00:10:27.163 "superblock": true, 00:10:27.163 "num_base_bdevs": 3, 00:10:27.163 "num_base_bdevs_discovered": 3, 00:10:27.163 "num_base_bdevs_operational": 3, 00:10:27.163 "base_bdevs_list": [ 00:10:27.163 { 00:10:27.163 "name": "BaseBdev1", 00:10:27.163 
"uuid": "88f5875d-a414-5778-85fd-7e415f317791", 00:10:27.163 "is_configured": true, 00:10:27.163 "data_offset": 2048, 00:10:27.163 "data_size": 63488 00:10:27.163 }, 00:10:27.163 { 00:10:27.163 "name": "BaseBdev2", 00:10:27.163 "uuid": "960891ae-729f-5667-aeec-33cc24840017", 00:10:27.163 "is_configured": true, 00:10:27.163 "data_offset": 2048, 00:10:27.163 "data_size": 63488 00:10:27.163 }, 00:10:27.163 { 00:10:27.163 "name": "BaseBdev3", 00:10:27.163 "uuid": "a65629b1-0848-50fa-8dae-b468141e3cff", 00:10:27.163 "is_configured": true, 00:10:27.163 "data_offset": 2048, 00:10:27.163 "data_size": 63488 00:10:27.163 } 00:10:27.163 ] 00:10:27.163 }' 00:10:27.163 17:55:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.163 17:55:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.731 17:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:27.731 17:55:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:27.731 [2024-11-26 17:55:09.401113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.669 [2024-11-26 17:55:10.297297] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:28.669 [2024-11-26 17:55:10.297482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:28.669 [2024-11-26 17:55:10.297748] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.669 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.669 "name": "raid_bdev1", 00:10:28.669 "uuid": "f6de2f4a-69ca-40bb-991b-365dcc9399ac", 00:10:28.669 "strip_size_kb": 0, 00:10:28.669 "state": "online", 00:10:28.669 "raid_level": "raid1", 00:10:28.669 "superblock": true, 00:10:28.669 "num_base_bdevs": 3, 00:10:28.669 "num_base_bdevs_discovered": 2, 00:10:28.669 "num_base_bdevs_operational": 2, 00:10:28.669 "base_bdevs_list": [ 00:10:28.669 { 00:10:28.669 "name": null, 00:10:28.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.670 "is_configured": false, 00:10:28.670 "data_offset": 0, 00:10:28.670 "data_size": 63488 00:10:28.670 }, 00:10:28.670 { 00:10:28.670 "name": "BaseBdev2", 00:10:28.670 "uuid": "960891ae-729f-5667-aeec-33cc24840017", 00:10:28.670 "is_configured": true, 00:10:28.670 "data_offset": 2048, 00:10:28.670 "data_size": 63488 00:10:28.670 }, 00:10:28.670 { 00:10:28.670 "name": "BaseBdev3", 00:10:28.670 "uuid": "a65629b1-0848-50fa-8dae-b468141e3cff", 00:10:28.670 "is_configured": true, 00:10:28.670 "data_offset": 2048, 00:10:28.670 "data_size": 63488 00:10:28.670 } 00:10:28.670 ] 00:10:28.670 }' 00:10:28.670 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.670 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.929 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:28.929 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.929 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.929 [2024-11-26 17:55:10.764296] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.929 [2024-11-26 17:55:10.764441] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.929 [2024-11-26 17:55:10.767716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.929 [2024-11-26 17:55:10.767817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.930 [2024-11-26 17:55:10.767910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.930 [2024-11-26 17:55:10.767929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:28.930 { 00:10:28.930 "results": [ 00:10:28.930 { 00:10:28.930 "job": "raid_bdev1", 00:10:28.930 "core_mask": "0x1", 00:10:28.930 "workload": "randrw", 00:10:28.930 "percentage": 50, 00:10:28.930 "status": "finished", 00:10:28.930 "queue_depth": 1, 00:10:28.930 "io_size": 131072, 00:10:28.930 "runtime": 1.363755, 00:10:28.930 "iops": 12456.782926552056, 00:10:28.930 "mibps": 1557.097865819007, 00:10:28.930 "io_failed": 0, 00:10:28.930 "io_timeout": 0, 00:10:28.930 "avg_latency_us": 77.09470666681747, 00:10:28.930 "min_latency_us": 25.7117903930131, 00:10:28.930 "max_latency_us": 1724.2550218340612 00:10:28.930 } 00:10:28.930 ], 00:10:28.930 "core_count": 1 00:10:28.930 } 00:10:28.930 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.930 17:55:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69470 00:10:28.930 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69470 ']' 00:10:28.930 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69470 00:10:28.930 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:28.930 17:55:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.930 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69470 00:10:29.189 killing process with pid 69470 00:10:29.189 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:29.189 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:29.189 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69470' 00:10:29.189 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69470 00:10:29.189 [2024-11-26 17:55:10.812936] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:29.189 17:55:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69470 00:10:29.448 [2024-11-26 17:55:11.077919] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.847 17:55:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:30.847 17:55:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:30.847 17:55:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JRvEXMHoZq 00:10:30.847 17:55:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:30.847 17:55:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:30.847 ************************************ 00:10:30.847 END TEST raid_write_error_test 00:10:30.847 ************************************ 00:10:30.847 17:55:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:30.847 17:55:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:30.847 17:55:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:10:30.847 00:10:30.847 real 0m4.876s 00:10:30.847 user 0m5.759s 00:10:30.847 sys 0m0.641s 00:10:30.847 17:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.847 17:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.847 17:55:12 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:30.847 17:55:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:30.847 17:55:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:30.847 17:55:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:30.847 17:55:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.847 17:55:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.847 ************************************ 00:10:30.847 START TEST raid_state_function_test 00:10:30.847 ************************************ 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.847 
17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:30.847 17:55:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69619 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69619' 00:10:30.847 Process raid pid: 69619 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69619 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69619 ']' 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.847 17:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.847 [2024-11-26 17:55:12.625277] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:10:30.847 [2024-11-26 17:55:12.625441] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.108 [2024-11-26 17:55:12.807967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.108 [2024-11-26 17:55:12.942576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.368 [2024-11-26 17:55:13.178105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.368 [2024-11-26 17:55:13.178152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.937 17:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.937 17:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:31.937 17:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:31.937 17:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.937 17:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.938 [2024-11-26 17:55:13.541709] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.938 [2024-11-26 17:55:13.541885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.938 [2024-11-26 17:55:13.541943] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.938 [2024-11-26 17:55:13.541973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.938 [2024-11-26 17:55:13.541998] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:31.938 [2024-11-26 17:55:13.542038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.938 [2024-11-26 17:55:13.542063] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:31.938 [2024-11-26 17:55:13.542090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.938 "name": "Existed_Raid", 00:10:31.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.938 "strip_size_kb": 64, 00:10:31.938 "state": "configuring", 00:10:31.938 "raid_level": "raid0", 00:10:31.938 "superblock": false, 00:10:31.938 "num_base_bdevs": 4, 00:10:31.938 "num_base_bdevs_discovered": 0, 00:10:31.938 "num_base_bdevs_operational": 4, 00:10:31.938 "base_bdevs_list": [ 00:10:31.938 { 00:10:31.938 "name": "BaseBdev1", 00:10:31.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.938 "is_configured": false, 00:10:31.938 "data_offset": 0, 00:10:31.938 "data_size": 0 00:10:31.938 }, 00:10:31.938 { 00:10:31.938 "name": "BaseBdev2", 00:10:31.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.938 "is_configured": false, 00:10:31.938 "data_offset": 0, 00:10:31.938 "data_size": 0 00:10:31.938 }, 00:10:31.938 { 00:10:31.938 "name": "BaseBdev3", 00:10:31.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.938 "is_configured": false, 00:10:31.938 "data_offset": 0, 00:10:31.938 "data_size": 0 00:10:31.938 }, 00:10:31.938 { 00:10:31.938 "name": "BaseBdev4", 00:10:31.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.938 "is_configured": false, 00:10:31.938 "data_offset": 0, 00:10:31.938 "data_size": 0 00:10:31.938 } 00:10:31.938 ] 00:10:31.938 }' 00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.938 17:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.198 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:32.198 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.198 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.198 [2024-11-26 17:55:14.013084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:32.198 [2024-11-26 17:55:14.013224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:10:32.198 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.198 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:32.198 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.198 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.198 [2024-11-26 17:55:14.025114] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:32.198 [2024-11-26 17:55:14.025254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:32.198 [2024-11-26 17:55:14.025295] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:32.198 [2024-11-26 17:55:14.025325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:32.198 [2024-11-26 17:55:14.025378] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:32.198 [2024-11-26 17:55:14.025406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:32.198 [2024-11-26 17:55:14.025440] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:32.198 [2024-11-26 17:55:14.025480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:32.198 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.198 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:32.198 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.198 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.458 [2024-11-26 17:55:14.077550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:32.458 BaseBdev1
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.458 [
00:10:32.458 {
00:10:32.458 "name": "BaseBdev1",
00:10:32.458 "aliases": [
00:10:32.458 "f986a0ec-119f-4d97-a965-5a29d385d7b2"
00:10:32.458 ],
00:10:32.458 "product_name": "Malloc disk",
00:10:32.458 "block_size": 512,
00:10:32.458 "num_blocks": 65536,
00:10:32.458 "uuid": "f986a0ec-119f-4d97-a965-5a29d385d7b2",
00:10:32.458 "assigned_rate_limits": {
00:10:32.458 "rw_ios_per_sec": 0,
00:10:32.458 "rw_mbytes_per_sec": 0,
00:10:32.458 "r_mbytes_per_sec": 0,
00:10:32.458 "w_mbytes_per_sec": 0
00:10:32.458 },
00:10:32.458 "claimed": true,
00:10:32.458 "claim_type": "exclusive_write",
00:10:32.458 "zoned": false,
00:10:32.458 "supported_io_types": {
00:10:32.458 "read": true,
00:10:32.458 "write": true,
00:10:32.458 "unmap": true,
00:10:32.458 "flush": true,
00:10:32.458 "reset": true,
00:10:32.458 "nvme_admin": false,
00:10:32.458 "nvme_io": false,
00:10:32.458 "nvme_io_md": false,
00:10:32.458 "write_zeroes": true,
00:10:32.458 "zcopy": true,
00:10:32.458 "get_zone_info": false,
00:10:32.458 "zone_management": false,
00:10:32.458 "zone_append": false,
00:10:32.458 "compare": false,
00:10:32.458 "compare_and_write": false,
00:10:32.458 "abort": true,
00:10:32.458 "seek_hole": false,
00:10:32.458 "seek_data": false,
00:10:32.458 "copy": true,
00:10:32.458 "nvme_iov_md": false
00:10:32.458 },
00:10:32.458 "memory_domains": [
00:10:32.458 {
00:10:32.458 "dma_device_id": "system",
00:10:32.458 "dma_device_type": 1
00:10:32.458 },
00:10:32.458 {
00:10:32.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:32.458 "dma_device_type": 2
00:10:32.458 }
00:10:32.458 ],
00:10:32.458 "driver_specific": {}
00:10:32.458 }
00:10:32.458 ]
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:32.458 "name": "Existed_Raid",
00:10:32.458 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:32.458 "strip_size_kb": 64,
00:10:32.458 "state": "configuring",
00:10:32.458 "raid_level": "raid0",
00:10:32.458 "superblock": false,
00:10:32.458 "num_base_bdevs": 4,
00:10:32.458 "num_base_bdevs_discovered": 1,
00:10:32.458 "num_base_bdevs_operational": 4,
00:10:32.458 "base_bdevs_list": [
00:10:32.458 {
00:10:32.458 "name": "BaseBdev1",
00:10:32.458 "uuid": "f986a0ec-119f-4d97-a965-5a29d385d7b2",
00:10:32.458 "is_configured": true,
00:10:32.458 "data_offset": 0,
00:10:32.458 "data_size": 65536
00:10:32.458 },
00:10:32.458 {
00:10:32.458 "name": "BaseBdev2",
00:10:32.458 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:32.458 "is_configured": false,
00:10:32.458 "data_offset": 0,
00:10:32.458 "data_size": 0
00:10:32.458 },
00:10:32.458 {
00:10:32.458 "name": "BaseBdev3",
00:10:32.458 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:32.458 "is_configured": false,
00:10:32.458 "data_offset": 0,
00:10:32.458 "data_size": 0
00:10:32.458 },
00:10:32.458 {
00:10:32.458 "name": "BaseBdev4",
00:10:32.458 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:32.458 "is_configured": false,
00:10:32.458 "data_offset": 0,
00:10:32.458 "data_size": 0
00:10:32.458 }
00:10:32.458 ]
00:10:32.458 }'
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:32.458 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.719 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:32.719 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.719 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.719 [2024-11-26 17:55:14.569048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:32.719 [2024-11-26 17:55:14.569198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:10:32.719 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.719 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:32.719 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.719 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.978 [2024-11-26 17:55:14.581128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:32.978 [2024-11-26 17:55:14.583294] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:32.978 [2024-11-26 17:55:14.583419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:32.978 [2024-11-26 17:55:14.583457] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:32.978 [2024-11-26 17:55:14.583487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:32.978 [2024-11-26 17:55:14.583510] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:32.978 [2024-11-26 17:55:14.583582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:32.978 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.978 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:10:32.978 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:32.978 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:32.978 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:32.978 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:32.978 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:32.978 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:32.978 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:32.978 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:32.978 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:32.978 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:32.978 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:32.979 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:32.979 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.979 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.979 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:32.979 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.979 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:32.979 "name": "Existed_Raid",
00:10:32.979 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:32.979 "strip_size_kb": 64,
00:10:32.979 "state": "configuring",
00:10:32.979 "raid_level": "raid0",
00:10:32.979 "superblock": false,
00:10:32.979 "num_base_bdevs": 4,
00:10:32.979 "num_base_bdevs_discovered": 1,
00:10:32.979 "num_base_bdevs_operational": 4,
00:10:32.979 "base_bdevs_list": [
00:10:32.979 {
00:10:32.979 "name": "BaseBdev1",
00:10:32.979 "uuid": "f986a0ec-119f-4d97-a965-5a29d385d7b2",
00:10:32.979 "is_configured": true,
00:10:32.979 "data_offset": 0,
00:10:32.979 "data_size": 65536
00:10:32.979 },
00:10:32.979 {
00:10:32.979 "name": "BaseBdev2",
00:10:32.979 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:32.979 "is_configured": false,
00:10:32.979 "data_offset": 0,
00:10:32.979 "data_size": 0
00:10:32.979 },
00:10:32.979 {
00:10:32.979 "name": "BaseBdev3",
00:10:32.979 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:32.979 "is_configured": false,
00:10:32.979 "data_offset": 0,
00:10:32.979 "data_size": 0
00:10:32.979 },
00:10:32.979 {
00:10:32.979 "name": "BaseBdev4",
00:10:32.979 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:32.979 "is_configured": false,
00:10:32.979 "data_offset": 0,
00:10:32.979 "data_size": 0
00:10:32.979 }
00:10:32.979 ]
00:10:32.979 }'
00:10:32.979 17:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:32.979 17:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.238 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:33.238 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:33.238 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.238 [2024-11-26 17:55:15.099571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:33.498 BaseBdev2
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.498 [
00:10:33.498 {
00:10:33.498 "name": "BaseBdev2",
00:10:33.498 "aliases": [
00:10:33.498 "4f1ac4cf-1557-4b51-aa4b-3d622a17e479"
00:10:33.498 ],
00:10:33.498 "product_name": "Malloc disk",
00:10:33.498 "block_size": 512,
00:10:33.498 "num_blocks": 65536,
00:10:33.498 "uuid": "4f1ac4cf-1557-4b51-aa4b-3d622a17e479",
00:10:33.498 "assigned_rate_limits": {
00:10:33.498 "rw_ios_per_sec": 0,
00:10:33.498 "rw_mbytes_per_sec": 0,
00:10:33.498 "r_mbytes_per_sec": 0,
00:10:33.498 "w_mbytes_per_sec": 0
00:10:33.498 },
00:10:33.498 "claimed": true,
00:10:33.498 "claim_type": "exclusive_write",
00:10:33.498 "zoned": false,
00:10:33.498 "supported_io_types": {
00:10:33.498 "read": true,
00:10:33.498 "write": true,
00:10:33.498 "unmap": true,
00:10:33.498 "flush": true,
00:10:33.498 "reset": true,
00:10:33.498 "nvme_admin": false,
00:10:33.498 "nvme_io": false,
00:10:33.498 "nvme_io_md": false,
00:10:33.498 "write_zeroes": true,
00:10:33.498 "zcopy": true,
00:10:33.498 "get_zone_info": false,
00:10:33.498 "zone_management": false,
00:10:33.498 "zone_append": false,
00:10:33.498 "compare": false,
00:10:33.498 "compare_and_write": false,
00:10:33.498 "abort": true,
00:10:33.498 "seek_hole": false,
00:10:33.498 "seek_data": false,
00:10:33.498 "copy": true,
00:10:33.498 "nvme_iov_md": false
00:10:33.498 },
00:10:33.498 "memory_domains": [
00:10:33.498 {
00:10:33.498 "dma_device_id": "system",
00:10:33.498 "dma_device_type": 1
00:10:33.498 },
00:10:33.498 {
00:10:33.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:33.498 "dma_device_type": 2
00:10:33.498 }
00:10:33.498 ],
00:10:33.498 "driver_specific": {}
00:10:33.498 }
00:10:33.498 ]
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:33.498 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:33.498 "name": "Existed_Raid",
00:10:33.498 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:33.498 "strip_size_kb": 64,
00:10:33.498 "state": "configuring",
00:10:33.498 "raid_level": "raid0",
00:10:33.498 "superblock": false,
00:10:33.498 "num_base_bdevs": 4,
00:10:33.498 "num_base_bdevs_discovered": 2,
00:10:33.498 "num_base_bdevs_operational": 4,
00:10:33.498 "base_bdevs_list": [
00:10:33.498 {
00:10:33.498 "name": "BaseBdev1",
00:10:33.498 "uuid": "f986a0ec-119f-4d97-a965-5a29d385d7b2",
00:10:33.498 "is_configured": true,
00:10:33.498 "data_offset": 0,
00:10:33.498 "data_size": 65536
00:10:33.498 },
00:10:33.498 {
00:10:33.498 "name": "BaseBdev2",
00:10:33.498 "uuid": "4f1ac4cf-1557-4b51-aa4b-3d622a17e479",
00:10:33.498 "is_configured": true,
00:10:33.498 "data_offset": 0,
00:10:33.498 "data_size": 65536
00:10:33.499 },
00:10:33.499 {
00:10:33.499 "name": "BaseBdev3",
00:10:33.499 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:33.499 "is_configured": false,
00:10:33.499 "data_offset": 0,
00:10:33.499 "data_size": 0
00:10:33.499 },
00:10:33.499 {
00:10:33.499 "name": "BaseBdev4",
00:10:33.499 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:33.499 "is_configured": false,
00:10:33.499 "data_offset": 0,
00:10:33.499 "data_size": 0
00:10:33.499 }
00:10:33.499 ]
00:10:33.499 }'
00:10:33.499 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:33.499 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.068 [2024-11-26 17:55:15.684425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:34.068 BaseBdev3
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.068 [
00:10:34.068 {
00:10:34.068 "name": "BaseBdev3",
00:10:34.068 "aliases": [
00:10:34.068 "eeefad6a-7430-4e33-8e3e-6839ada3e91c"
00:10:34.068 ],
00:10:34.068 "product_name": "Malloc disk",
00:10:34.068 "block_size": 512,
00:10:34.068 "num_blocks": 65536,
00:10:34.068 "uuid": "eeefad6a-7430-4e33-8e3e-6839ada3e91c",
00:10:34.068 "assigned_rate_limits": {
00:10:34.068 "rw_ios_per_sec": 0,
00:10:34.068 "rw_mbytes_per_sec": 0,
00:10:34.068 "r_mbytes_per_sec": 0,
00:10:34.068 "w_mbytes_per_sec": 0
00:10:34.068 },
00:10:34.068 "claimed": true,
00:10:34.068 "claim_type": "exclusive_write",
00:10:34.068 "zoned": false,
00:10:34.068 "supported_io_types": {
00:10:34.068 "read": true,
00:10:34.068 "write": true,
00:10:34.068 "unmap": true,
00:10:34.068 "flush": true,
00:10:34.068 "reset": true,
00:10:34.068 "nvme_admin": false,
00:10:34.068 "nvme_io": false,
00:10:34.068 "nvme_io_md": false,
00:10:34.068 "write_zeroes": true,
00:10:34.068 "zcopy": true,
00:10:34.068 "get_zone_info": false,
00:10:34.068 "zone_management": false,
00:10:34.068 "zone_append": false,
00:10:34.068 "compare": false,
00:10:34.068 "compare_and_write": false,
00:10:34.068 "abort": true,
00:10:34.068 "seek_hole": false,
00:10:34.068 "seek_data": false,
00:10:34.068 "copy": true,
00:10:34.068 "nvme_iov_md": false
00:10:34.068 },
00:10:34.068 "memory_domains": [
00:10:34.068 {
00:10:34.068 "dma_device_id": "system",
00:10:34.068 "dma_device_type": 1
00:10:34.068 },
00:10:34.068 {
00:10:34.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:34.068 "dma_device_type": 2
00:10:34.068 }
00:10:34.068 ],
00:10:34.068 "driver_specific": {}
00:10:34.068 }
00:10:34.068 ]
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:34.068 "name": "Existed_Raid",
00:10:34.068 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:34.068 "strip_size_kb": 64,
00:10:34.068 "state": "configuring",
00:10:34.068 "raid_level": "raid0",
00:10:34.068 "superblock": false,
00:10:34.068 "num_base_bdevs": 4,
00:10:34.068 "num_base_bdevs_discovered": 3,
00:10:34.068 "num_base_bdevs_operational": 4,
00:10:34.068 "base_bdevs_list": [
00:10:34.068 {
00:10:34.068 "name": "BaseBdev1",
00:10:34.068 "uuid": "f986a0ec-119f-4d97-a965-5a29d385d7b2",
00:10:34.068 "is_configured": true,
00:10:34.068 "data_offset": 0,
00:10:34.068 "data_size": 65536
00:10:34.068 },
00:10:34.068 {
00:10:34.068 "name": "BaseBdev2",
00:10:34.068 "uuid": "4f1ac4cf-1557-4b51-aa4b-3d622a17e479",
00:10:34.068 "is_configured": true,
00:10:34.068 "data_offset": 0,
00:10:34.068 "data_size": 65536
00:10:34.068 },
00:10:34.068 {
00:10:34.068 "name": "BaseBdev3",
00:10:34.068 "uuid": "eeefad6a-7430-4e33-8e3e-6839ada3e91c",
00:10:34.068 "is_configured": true,
00:10:34.068 "data_offset": 0,
00:10:34.068 "data_size": 65536
00:10:34.068 },
00:10:34.068 {
00:10:34.068 "name": "BaseBdev4",
00:10:34.068 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:34.068 "is_configured": false,
00:10:34.068 "data_offset": 0,
00:10:34.068 "data_size": 0
00:10:34.068 }
00:10:34.068 ]
00:10:34.068 }'
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:34.068 17:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.328 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:10:34.328 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.328 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.589 [2024-11-26 17:55:16.231283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:10:34.589 [2024-11-26 17:55:16.231445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:10:34.589 [2024-11-26 17:55:16.231478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:10:34.589 [2024-11-26 17:55:16.231828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:10:34.589 [2024-11-26 17:55:16.232088] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:10:34.589 [2024-11-26 17:55:16.232141] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:10:34.589 [2024-11-26 17:55:16.232472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:34.589 BaseBdev4
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.589 [
00:10:34.589 {
00:10:34.589 "name": "BaseBdev4",
00:10:34.589 "aliases": [
00:10:34.589 "acfa16a9-1fda-4014-b366-4fe4375a7c5e"
00:10:34.589 ],
00:10:34.589 "product_name": "Malloc disk",
00:10:34.589 "block_size": 512,
00:10:34.589 "num_blocks": 65536,
00:10:34.589 "uuid": "acfa16a9-1fda-4014-b366-4fe4375a7c5e",
00:10:34.589 "assigned_rate_limits": {
00:10:34.589 "rw_ios_per_sec": 0,
00:10:34.589 "rw_mbytes_per_sec": 0,
00:10:34.589 "r_mbytes_per_sec": 0,
00:10:34.589 "w_mbytes_per_sec": 0
00:10:34.589 },
00:10:34.589 "claimed": true,
00:10:34.589 "claim_type": "exclusive_write",
00:10:34.589 "zoned": false,
00:10:34.589 "supported_io_types": {
00:10:34.589 "read": true,
00:10:34.589 "write": true,
00:10:34.589 "unmap": true,
00:10:34.589 "flush": true,
00:10:34.589 "reset": true,
00:10:34.589 "nvme_admin": false,
00:10:34.589 "nvme_io": false,
00:10:34.589 "nvme_io_md": false,
00:10:34.589 "write_zeroes": true,
00:10:34.589 "zcopy": true,
00:10:34.589 "get_zone_info": false,
00:10:34.589 "zone_management": false,
00:10:34.589 "zone_append": false,
00:10:34.589 "compare": false,
00:10:34.589 "compare_and_write": false,
00:10:34.589 "abort": true,
00:10:34.589 "seek_hole": false,
00:10:34.589 "seek_data": false,
00:10:34.589 "copy": true,
00:10:34.589 "nvme_iov_md": false
00:10:34.589 },
00:10:34.589 "memory_domains": [
00:10:34.589 {
00:10:34.589 "dma_device_id": "system",
00:10:34.589 "dma_device_type": 1
00:10:34.589 },
00:10:34.589 {
00:10:34.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:34.589 "dma_device_type": 2
00:10:34.589 }
00:10:34.589 ],
00:10:34.589 "driver_specific": {}
00:10:34.589 }
00:10:34.589 ]
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:34.589 "name": "Existed_Raid",
00:10:34.589 "uuid": "66421417-5aea-4dfc-bd44-f0498bcd06c1",
00:10:34.589 "strip_size_kb": 64,
00:10:34.589 "state": "online",
00:10:34.589 "raid_level": "raid0",
00:10:34.589 "superblock": false,
00:10:34.589 "num_base_bdevs": 4,
00:10:34.589 "num_base_bdevs_discovered": 4,
00:10:34.589 "num_base_bdevs_operational": 4,
00:10:34.589 "base_bdevs_list": [
00:10:34.589 {
00:10:34.589 "name": "BaseBdev1",
00:10:34.589 "uuid": "f986a0ec-119f-4d97-a965-5a29d385d7b2",
00:10:34.589 "is_configured": true,
00:10:34.589 "data_offset": 0,
00:10:34.589 "data_size": 65536
00:10:34.589 },
00:10:34.589 {
00:10:34.589 "name": "BaseBdev2",
00:10:34.589 "uuid": "4f1ac4cf-1557-4b51-aa4b-3d622a17e479",
00:10:34.589 "is_configured": true,
00:10:34.589 "data_offset": 0,
00:10:34.589 "data_size": 65536
00:10:34.589 },
00:10:34.589 {
00:10:34.589 "name": "BaseBdev3",
00:10:34.589 "uuid": "eeefad6a-7430-4e33-8e3e-6839ada3e91c",
00:10:34.589 "is_configured": true,
00:10:34.589 "data_offset": 0,
00:10:34.589 "data_size": 65536
00:10:34.589 },
00:10:34.589 {
00:10:34.589 "name": "BaseBdev4",
00:10:34.589 "uuid": "acfa16a9-1fda-4014-b366-4fe4375a7c5e",
00:10:34.589 "is_configured": true,
00:10:34.589 "data_offset": 0,
00:10:34.589 "data_size": 65536
00:10:34.589 }
00:10:34.589 ]
00:10:34.589 }'
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:34.589 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.849 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:10:34.849 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:34.849 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:34.849 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:34.849 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:34.849 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:34.849 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:34.849 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.849 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.849 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:34.849 [2024-11-26 17:55:16.703041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:35.109 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:35.109 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:35.109 "name": "Existed_Raid",
00:10:35.109 "aliases": [
00:10:35.109 "66421417-5aea-4dfc-bd44-f0498bcd06c1"
00:10:35.109 ],
00:10:35.109 "product_name": "Raid Volume",
00:10:35.109 "block_size": 512,
00:10:35.109 "num_blocks": 262144,
00:10:35.109 "uuid": "66421417-5aea-4dfc-bd44-f0498bcd06c1",
00:10:35.109 "assigned_rate_limits": {
00:10:35.109 "rw_ios_per_sec": 0,
00:10:35.109 "rw_mbytes_per_sec": 0,
00:10:35.109 "r_mbytes_per_sec": 0,
00:10:35.109 "w_mbytes_per_sec": 0
00:10:35.109 },
00:10:35.109 "claimed": false,
00:10:35.109 "zoned": false,
00:10:35.109 "supported_io_types": {
00:10:35.109 "read": true,
00:10:35.109 "write": true,
00:10:35.109 "unmap": true,
00:10:35.109 "flush": true,
00:10:35.109 "reset": true,
00:10:35.109 "nvme_admin": false,
00:10:35.109 "nvme_io": false,
00:10:35.109 "nvme_io_md": false,
00:10:35.109 "write_zeroes": true,
00:10:35.109 "zcopy": false,
00:10:35.109 "get_zone_info": false,
00:10:35.109 "zone_management": false,
00:10:35.109 "zone_append": false,
00:10:35.109 "compare": false,
00:10:35.109 "compare_and_write": false,
00:10:35.109 "abort": false,
00:10:35.109 "seek_hole": false,
00:10:35.109 "seek_data": false,
00:10:35.109 "copy": false,
00:10:35.109 "nvme_iov_md": false
00:10:35.109 },
00:10:35.109 "memory_domains": [
00:10:35.109 {
00:10:35.109 "dma_device_id": "system",
00:10:35.109 "dma_device_type": 1
00:10:35.109 },
00:10:35.110 {
00:10:35.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:35.110 "dma_device_type": 2
00:10:35.110 },
00:10:35.110 {
00:10:35.110 "dma_device_id": "system",
00:10:35.110 "dma_device_type": 1
00:10:35.110 },
00:10:35.110 {
00:10:35.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:35.110 "dma_device_type": 2
00:10:35.110 },
00:10:35.110 {
00:10:35.110 "dma_device_id": "system",
00:10:35.110 "dma_device_type": 1
00:10:35.110 },
00:10:35.110 {
00:10:35.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:35.110 "dma_device_type": 2 00:10:35.110 }, 00:10:35.110 { 00:10:35.110 "dma_device_id": "system", 00:10:35.110 "dma_device_type": 1 00:10:35.110 }, 00:10:35.110 { 00:10:35.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.110 "dma_device_type": 2 00:10:35.110 } 00:10:35.110 ], 00:10:35.110 "driver_specific": { 00:10:35.110 "raid": { 00:10:35.110 "uuid": "66421417-5aea-4dfc-bd44-f0498bcd06c1", 00:10:35.110 "strip_size_kb": 64, 00:10:35.110 "state": "online", 00:10:35.110 "raid_level": "raid0", 00:10:35.110 "superblock": false, 00:10:35.110 "num_base_bdevs": 4, 00:10:35.110 "num_base_bdevs_discovered": 4, 00:10:35.110 "num_base_bdevs_operational": 4, 00:10:35.110 "base_bdevs_list": [ 00:10:35.110 { 00:10:35.110 "name": "BaseBdev1", 00:10:35.110 "uuid": "f986a0ec-119f-4d97-a965-5a29d385d7b2", 00:10:35.110 "is_configured": true, 00:10:35.110 "data_offset": 0, 00:10:35.110 "data_size": 65536 00:10:35.110 }, 00:10:35.110 { 00:10:35.110 "name": "BaseBdev2", 00:10:35.110 "uuid": "4f1ac4cf-1557-4b51-aa4b-3d622a17e479", 00:10:35.110 "is_configured": true, 00:10:35.110 "data_offset": 0, 00:10:35.110 "data_size": 65536 00:10:35.110 }, 00:10:35.110 { 00:10:35.110 "name": "BaseBdev3", 00:10:35.110 "uuid": "eeefad6a-7430-4e33-8e3e-6839ada3e91c", 00:10:35.110 "is_configured": true, 00:10:35.110 "data_offset": 0, 00:10:35.110 "data_size": 65536 00:10:35.110 }, 00:10:35.110 { 00:10:35.110 "name": "BaseBdev4", 00:10:35.110 "uuid": "acfa16a9-1fda-4014-b366-4fe4375a7c5e", 00:10:35.110 "is_configured": true, 00:10:35.110 "data_offset": 0, 00:10:35.110 "data_size": 65536 00:10:35.110 } 00:10:35.110 ] 00:10:35.110 } 00:10:35.110 } 00:10:35.110 }' 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:35.110 BaseBdev2 00:10:35.110 BaseBdev3 
00:10:35.110 BaseBdev4' 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.110 17:55:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.110 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.371 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.371 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.371 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.371 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:35.371 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.371 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.371 17:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.371 17:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.371 17:55:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.371 [2024-11-26 17:55:17.038227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:35.371 [2024-11-26 17:55:17.038361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.371 [2024-11-26 17:55:17.038446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.371 "name": "Existed_Raid", 00:10:35.371 "uuid": "66421417-5aea-4dfc-bd44-f0498bcd06c1", 00:10:35.371 "strip_size_kb": 64, 00:10:35.371 "state": "offline", 00:10:35.371 "raid_level": "raid0", 00:10:35.371 "superblock": false, 00:10:35.371 "num_base_bdevs": 4, 00:10:35.371 "num_base_bdevs_discovered": 3, 00:10:35.371 "num_base_bdevs_operational": 3, 00:10:35.371 "base_bdevs_list": [ 00:10:35.371 { 00:10:35.371 "name": null, 00:10:35.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.371 "is_configured": false, 00:10:35.371 "data_offset": 0, 00:10:35.371 "data_size": 65536 00:10:35.371 }, 00:10:35.371 { 00:10:35.371 "name": "BaseBdev2", 00:10:35.371 "uuid": "4f1ac4cf-1557-4b51-aa4b-3d622a17e479", 00:10:35.371 "is_configured": 
true, 00:10:35.371 "data_offset": 0, 00:10:35.371 "data_size": 65536 00:10:35.371 }, 00:10:35.371 { 00:10:35.371 "name": "BaseBdev3", 00:10:35.371 "uuid": "eeefad6a-7430-4e33-8e3e-6839ada3e91c", 00:10:35.371 "is_configured": true, 00:10:35.371 "data_offset": 0, 00:10:35.371 "data_size": 65536 00:10:35.371 }, 00:10:35.371 { 00:10:35.371 "name": "BaseBdev4", 00:10:35.371 "uuid": "acfa16a9-1fda-4014-b366-4fe4375a7c5e", 00:10:35.371 "is_configured": true, 00:10:35.371 "data_offset": 0, 00:10:35.371 "data_size": 65536 00:10:35.371 } 00:10:35.371 ] 00:10:35.371 }' 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.371 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.943 [2024-11-26 17:55:17.659193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.943 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.203 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:36.203 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:36.203 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:36.203 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.203 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.203 [2024-11-26 17:55:17.831555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:36.203 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.204 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:36.204 17:55:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.204 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.204 17:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:36.204 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.204 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.204 17:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.204 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:36.204 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:36.204 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:36.204 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.204 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.204 [2024-11-26 17:55:18.010304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:36.204 [2024-11-26 17:55:18.010469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.464 BaseBdev2 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.464 [ 00:10:36.464 { 00:10:36.464 "name": "BaseBdev2", 00:10:36.464 "aliases": [ 00:10:36.464 "f3a49f06-efdf-4cbe-a14b-99832520b20c" 00:10:36.464 ], 00:10:36.464 "product_name": "Malloc disk", 00:10:36.464 "block_size": 512, 00:10:36.464 "num_blocks": 65536, 00:10:36.464 "uuid": "f3a49f06-efdf-4cbe-a14b-99832520b20c", 00:10:36.464 "assigned_rate_limits": { 00:10:36.464 "rw_ios_per_sec": 0, 00:10:36.464 "rw_mbytes_per_sec": 0, 00:10:36.464 "r_mbytes_per_sec": 0, 00:10:36.464 "w_mbytes_per_sec": 0 00:10:36.464 }, 00:10:36.464 "claimed": false, 00:10:36.464 "zoned": false, 00:10:36.464 "supported_io_types": { 00:10:36.464 "read": true, 00:10:36.464 "write": true, 00:10:36.464 "unmap": true, 00:10:36.464 "flush": true, 00:10:36.464 "reset": true, 00:10:36.464 "nvme_admin": false, 00:10:36.464 "nvme_io": false, 00:10:36.464 "nvme_io_md": false, 00:10:36.464 "write_zeroes": true, 00:10:36.464 "zcopy": true, 00:10:36.464 "get_zone_info": false, 00:10:36.464 "zone_management": false, 00:10:36.464 "zone_append": false, 00:10:36.464 "compare": false, 00:10:36.464 "compare_and_write": false, 00:10:36.464 "abort": true, 00:10:36.464 "seek_hole": false, 00:10:36.464 
"seek_data": false, 00:10:36.464 "copy": true, 00:10:36.464 "nvme_iov_md": false 00:10:36.464 }, 00:10:36.464 "memory_domains": [ 00:10:36.464 { 00:10:36.464 "dma_device_id": "system", 00:10:36.464 "dma_device_type": 1 00:10:36.464 }, 00:10:36.464 { 00:10:36.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.464 "dma_device_type": 2 00:10:36.464 } 00:10:36.464 ], 00:10:36.464 "driver_specific": {} 00:10:36.464 } 00:10:36.464 ] 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.464 BaseBdev3 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.464 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.724 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.725 [ 00:10:36.725 { 00:10:36.725 "name": "BaseBdev3", 00:10:36.725 "aliases": [ 00:10:36.725 "87757f7f-835b-4ed0-adc6-2ede0b6a485d" 00:10:36.725 ], 00:10:36.725 "product_name": "Malloc disk", 00:10:36.725 "block_size": 512, 00:10:36.725 "num_blocks": 65536, 00:10:36.725 "uuid": "87757f7f-835b-4ed0-adc6-2ede0b6a485d", 00:10:36.725 "assigned_rate_limits": { 00:10:36.725 "rw_ios_per_sec": 0, 00:10:36.725 "rw_mbytes_per_sec": 0, 00:10:36.725 "r_mbytes_per_sec": 0, 00:10:36.725 "w_mbytes_per_sec": 0 00:10:36.725 }, 00:10:36.725 "claimed": false, 00:10:36.725 "zoned": false, 00:10:36.725 "supported_io_types": { 00:10:36.725 "read": true, 00:10:36.725 "write": true, 00:10:36.725 "unmap": true, 00:10:36.725 "flush": true, 00:10:36.725 "reset": true, 00:10:36.725 "nvme_admin": false, 00:10:36.725 "nvme_io": false, 00:10:36.725 "nvme_io_md": false, 00:10:36.725 "write_zeroes": true, 00:10:36.725 "zcopy": true, 00:10:36.725 "get_zone_info": false, 00:10:36.725 "zone_management": false, 00:10:36.725 "zone_append": false, 00:10:36.725 "compare": false, 00:10:36.725 "compare_and_write": false, 00:10:36.725 "abort": true, 00:10:36.725 "seek_hole": false, 00:10:36.725 "seek_data": false, 
00:10:36.725 "copy": true, 00:10:36.725 "nvme_iov_md": false 00:10:36.725 }, 00:10:36.725 "memory_domains": [ 00:10:36.725 { 00:10:36.725 "dma_device_id": "system", 00:10:36.725 "dma_device_type": 1 00:10:36.725 }, 00:10:36.725 { 00:10:36.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.725 "dma_device_type": 2 00:10:36.725 } 00:10:36.725 ], 00:10:36.725 "driver_specific": {} 00:10:36.725 } 00:10:36.725 ] 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.725 BaseBdev4 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.725 
17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.725 [ 00:10:36.725 { 00:10:36.725 "name": "BaseBdev4", 00:10:36.725 "aliases": [ 00:10:36.725 "bbe027e9-ca06-4151-84dc-470423c9f12b" 00:10:36.725 ], 00:10:36.725 "product_name": "Malloc disk", 00:10:36.725 "block_size": 512, 00:10:36.725 "num_blocks": 65536, 00:10:36.725 "uuid": "bbe027e9-ca06-4151-84dc-470423c9f12b", 00:10:36.725 "assigned_rate_limits": { 00:10:36.725 "rw_ios_per_sec": 0, 00:10:36.725 "rw_mbytes_per_sec": 0, 00:10:36.725 "r_mbytes_per_sec": 0, 00:10:36.725 "w_mbytes_per_sec": 0 00:10:36.725 }, 00:10:36.725 "claimed": false, 00:10:36.725 "zoned": false, 00:10:36.725 "supported_io_types": { 00:10:36.725 "read": true, 00:10:36.725 "write": true, 00:10:36.725 "unmap": true, 00:10:36.725 "flush": true, 00:10:36.725 "reset": true, 00:10:36.725 "nvme_admin": false, 00:10:36.725 "nvme_io": false, 00:10:36.725 "nvme_io_md": false, 00:10:36.725 "write_zeroes": true, 00:10:36.725 "zcopy": true, 00:10:36.725 "get_zone_info": false, 00:10:36.725 "zone_management": false, 00:10:36.725 "zone_append": false, 00:10:36.725 "compare": false, 00:10:36.725 "compare_and_write": false, 00:10:36.725 "abort": true, 00:10:36.725 "seek_hole": false, 00:10:36.725 "seek_data": false, 00:10:36.725 
"copy": true, 00:10:36.725 "nvme_iov_md": false 00:10:36.725 }, 00:10:36.725 "memory_domains": [ 00:10:36.725 { 00:10:36.725 "dma_device_id": "system", 00:10:36.725 "dma_device_type": 1 00:10:36.725 }, 00:10:36.725 { 00:10:36.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.725 "dma_device_type": 2 00:10:36.725 } 00:10:36.725 ], 00:10:36.725 "driver_specific": {} 00:10:36.725 } 00:10:36.725 ] 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.725 [2024-11-26 17:55:18.448377] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.725 [2024-11-26 17:55:18.448538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.725 [2024-11-26 17:55:18.448601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.725 [2024-11-26 17:55:18.450882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.725 [2024-11-26 17:55:18.451012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.725 17:55:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.725 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.725 "name": "Existed_Raid", 00:10:36.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.725 "strip_size_kb": 64, 00:10:36.725 "state": "configuring", 00:10:36.725 
"raid_level": "raid0", 00:10:36.725 "superblock": false, 00:10:36.725 "num_base_bdevs": 4, 00:10:36.725 "num_base_bdevs_discovered": 3, 00:10:36.725 "num_base_bdevs_operational": 4, 00:10:36.725 "base_bdevs_list": [ 00:10:36.725 { 00:10:36.725 "name": "BaseBdev1", 00:10:36.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.725 "is_configured": false, 00:10:36.725 "data_offset": 0, 00:10:36.725 "data_size": 0 00:10:36.725 }, 00:10:36.725 { 00:10:36.725 "name": "BaseBdev2", 00:10:36.725 "uuid": "f3a49f06-efdf-4cbe-a14b-99832520b20c", 00:10:36.725 "is_configured": true, 00:10:36.725 "data_offset": 0, 00:10:36.725 "data_size": 65536 00:10:36.725 }, 00:10:36.725 { 00:10:36.725 "name": "BaseBdev3", 00:10:36.725 "uuid": "87757f7f-835b-4ed0-adc6-2ede0b6a485d", 00:10:36.725 "is_configured": true, 00:10:36.725 "data_offset": 0, 00:10:36.725 "data_size": 65536 00:10:36.725 }, 00:10:36.725 { 00:10:36.725 "name": "BaseBdev4", 00:10:36.725 "uuid": "bbe027e9-ca06-4151-84dc-470423c9f12b", 00:10:36.725 "is_configured": true, 00:10:36.725 "data_offset": 0, 00:10:36.726 "data_size": 65536 00:10:36.726 } 00:10:36.726 ] 00:10:36.726 }' 00:10:36.726 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.726 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.299 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:37.299 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.299 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.299 [2024-11-26 17:55:18.891655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:37.299 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.299 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.299 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.299 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.299 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.299 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.299 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.300 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.300 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.300 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.300 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.300 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.300 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.300 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.300 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.300 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.300 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.300 "name": "Existed_Raid", 00:10:37.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.300 "strip_size_kb": 64, 00:10:37.300 "state": "configuring", 00:10:37.300 "raid_level": "raid0", 00:10:37.300 "superblock": false, 00:10:37.300 
"num_base_bdevs": 4, 00:10:37.300 "num_base_bdevs_discovered": 2, 00:10:37.300 "num_base_bdevs_operational": 4, 00:10:37.300 "base_bdevs_list": [ 00:10:37.300 { 00:10:37.300 "name": "BaseBdev1", 00:10:37.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.300 "is_configured": false, 00:10:37.300 "data_offset": 0, 00:10:37.300 "data_size": 0 00:10:37.300 }, 00:10:37.300 { 00:10:37.300 "name": null, 00:10:37.300 "uuid": "f3a49f06-efdf-4cbe-a14b-99832520b20c", 00:10:37.300 "is_configured": false, 00:10:37.300 "data_offset": 0, 00:10:37.300 "data_size": 65536 00:10:37.300 }, 00:10:37.300 { 00:10:37.300 "name": "BaseBdev3", 00:10:37.300 "uuid": "87757f7f-835b-4ed0-adc6-2ede0b6a485d", 00:10:37.300 "is_configured": true, 00:10:37.300 "data_offset": 0, 00:10:37.300 "data_size": 65536 00:10:37.300 }, 00:10:37.300 { 00:10:37.300 "name": "BaseBdev4", 00:10:37.300 "uuid": "bbe027e9-ca06-4151-84dc-470423c9f12b", 00:10:37.300 "is_configured": true, 00:10:37.300 "data_offset": 0, 00:10:37.300 "data_size": 65536 00:10:37.300 } 00:10:37.300 ] 00:10:37.300 }' 00:10:37.300 17:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.300 17:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.559 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.559 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:37.559 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.559 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.559 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:37.820 17:55:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.820 [2024-11-26 17:55:19.469705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.820 BaseBdev1 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:37.820 [ 00:10:37.820 { 00:10:37.820 "name": "BaseBdev1", 00:10:37.820 "aliases": [ 00:10:37.820 "3b7383e1-ba90-45ca-a0d5-3a6d6fa5a44f" 00:10:37.820 ], 00:10:37.820 "product_name": "Malloc disk", 00:10:37.820 "block_size": 512, 00:10:37.820 "num_blocks": 65536, 00:10:37.820 "uuid": "3b7383e1-ba90-45ca-a0d5-3a6d6fa5a44f", 00:10:37.820 "assigned_rate_limits": { 00:10:37.820 "rw_ios_per_sec": 0, 00:10:37.820 "rw_mbytes_per_sec": 0, 00:10:37.820 "r_mbytes_per_sec": 0, 00:10:37.820 "w_mbytes_per_sec": 0 00:10:37.820 }, 00:10:37.820 "claimed": true, 00:10:37.820 "claim_type": "exclusive_write", 00:10:37.820 "zoned": false, 00:10:37.820 "supported_io_types": { 00:10:37.820 "read": true, 00:10:37.820 "write": true, 00:10:37.820 "unmap": true, 00:10:37.820 "flush": true, 00:10:37.820 "reset": true, 00:10:37.820 "nvme_admin": false, 00:10:37.820 "nvme_io": false, 00:10:37.820 "nvme_io_md": false, 00:10:37.820 "write_zeroes": true, 00:10:37.820 "zcopy": true, 00:10:37.820 "get_zone_info": false, 00:10:37.820 "zone_management": false, 00:10:37.820 "zone_append": false, 00:10:37.820 "compare": false, 00:10:37.820 "compare_and_write": false, 00:10:37.820 "abort": true, 00:10:37.820 "seek_hole": false, 00:10:37.820 "seek_data": false, 00:10:37.820 "copy": true, 00:10:37.820 "nvme_iov_md": false 00:10:37.820 }, 00:10:37.820 "memory_domains": [ 00:10:37.820 { 00:10:37.820 "dma_device_id": "system", 00:10:37.820 "dma_device_type": 1 00:10:37.820 }, 00:10:37.820 { 00:10:37.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.820 "dma_device_type": 2 00:10:37.820 } 00:10:37.820 ], 00:10:37.820 "driver_specific": {} 00:10:37.820 } 00:10:37.820 ] 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.820 "name": "Existed_Raid", 00:10:37.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.820 "strip_size_kb": 64, 00:10:37.820 "state": "configuring", 00:10:37.820 "raid_level": "raid0", 00:10:37.820 "superblock": false, 
00:10:37.820 "num_base_bdevs": 4, 00:10:37.820 "num_base_bdevs_discovered": 3, 00:10:37.820 "num_base_bdevs_operational": 4, 00:10:37.820 "base_bdevs_list": [ 00:10:37.820 { 00:10:37.820 "name": "BaseBdev1", 00:10:37.820 "uuid": "3b7383e1-ba90-45ca-a0d5-3a6d6fa5a44f", 00:10:37.820 "is_configured": true, 00:10:37.820 "data_offset": 0, 00:10:37.820 "data_size": 65536 00:10:37.820 }, 00:10:37.820 { 00:10:37.820 "name": null, 00:10:37.820 "uuid": "f3a49f06-efdf-4cbe-a14b-99832520b20c", 00:10:37.820 "is_configured": false, 00:10:37.820 "data_offset": 0, 00:10:37.820 "data_size": 65536 00:10:37.820 }, 00:10:37.820 { 00:10:37.820 "name": "BaseBdev3", 00:10:37.820 "uuid": "87757f7f-835b-4ed0-adc6-2ede0b6a485d", 00:10:37.820 "is_configured": true, 00:10:37.820 "data_offset": 0, 00:10:37.820 "data_size": 65536 00:10:37.820 }, 00:10:37.820 { 00:10:37.820 "name": "BaseBdev4", 00:10:37.820 "uuid": "bbe027e9-ca06-4151-84dc-470423c9f12b", 00:10:37.820 "is_configured": true, 00:10:37.820 "data_offset": 0, 00:10:37.820 "data_size": 65536 00:10:37.820 } 00:10:37.820 ] 00:10:37.820 }' 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.820 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.392 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:38.392 17:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.392 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.392 17:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:38.392 17:55:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.392 [2024-11-26 17:55:20.021111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.392 17:55:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.392 "name": "Existed_Raid", 00:10:38.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.392 "strip_size_kb": 64, 00:10:38.392 "state": "configuring", 00:10:38.392 "raid_level": "raid0", 00:10:38.392 "superblock": false, 00:10:38.392 "num_base_bdevs": 4, 00:10:38.392 "num_base_bdevs_discovered": 2, 00:10:38.392 "num_base_bdevs_operational": 4, 00:10:38.392 "base_bdevs_list": [ 00:10:38.392 { 00:10:38.392 "name": "BaseBdev1", 00:10:38.392 "uuid": "3b7383e1-ba90-45ca-a0d5-3a6d6fa5a44f", 00:10:38.392 "is_configured": true, 00:10:38.392 "data_offset": 0, 00:10:38.392 "data_size": 65536 00:10:38.392 }, 00:10:38.392 { 00:10:38.392 "name": null, 00:10:38.392 "uuid": "f3a49f06-efdf-4cbe-a14b-99832520b20c", 00:10:38.392 "is_configured": false, 00:10:38.392 "data_offset": 0, 00:10:38.392 "data_size": 65536 00:10:38.392 }, 00:10:38.392 { 00:10:38.392 "name": null, 00:10:38.392 "uuid": "87757f7f-835b-4ed0-adc6-2ede0b6a485d", 00:10:38.392 "is_configured": false, 00:10:38.392 "data_offset": 0, 00:10:38.392 "data_size": 65536 00:10:38.392 }, 00:10:38.392 { 00:10:38.392 "name": "BaseBdev4", 00:10:38.392 "uuid": "bbe027e9-ca06-4151-84dc-470423c9f12b", 00:10:38.392 "is_configured": true, 00:10:38.392 "data_offset": 0, 00:10:38.392 "data_size": 65536 00:10:38.392 } 00:10:38.392 ] 00:10:38.392 }' 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.392 17:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.651 17:55:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.652 17:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.652 17:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.652 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:38.652 17:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.910 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.911 [2024-11-26 17:55:20.537102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.911 "name": "Existed_Raid", 00:10:38.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.911 "strip_size_kb": 64, 00:10:38.911 "state": "configuring", 00:10:38.911 "raid_level": "raid0", 00:10:38.911 "superblock": false, 00:10:38.911 "num_base_bdevs": 4, 00:10:38.911 "num_base_bdevs_discovered": 3, 00:10:38.911 "num_base_bdevs_operational": 4, 00:10:38.911 "base_bdevs_list": [ 00:10:38.911 { 00:10:38.911 "name": "BaseBdev1", 00:10:38.911 "uuid": "3b7383e1-ba90-45ca-a0d5-3a6d6fa5a44f", 00:10:38.911 "is_configured": true, 00:10:38.911 "data_offset": 0, 00:10:38.911 "data_size": 65536 00:10:38.911 }, 00:10:38.911 { 00:10:38.911 "name": null, 00:10:38.911 "uuid": "f3a49f06-efdf-4cbe-a14b-99832520b20c", 00:10:38.911 "is_configured": false, 00:10:38.911 "data_offset": 0, 00:10:38.911 "data_size": 65536 00:10:38.911 }, 00:10:38.911 { 00:10:38.911 "name": "BaseBdev3", 00:10:38.911 "uuid": "87757f7f-835b-4ed0-adc6-2ede0b6a485d", 
00:10:38.911 "is_configured": true, 00:10:38.911 "data_offset": 0, 00:10:38.911 "data_size": 65536 00:10:38.911 }, 00:10:38.911 { 00:10:38.911 "name": "BaseBdev4", 00:10:38.911 "uuid": "bbe027e9-ca06-4151-84dc-470423c9f12b", 00:10:38.911 "is_configured": true, 00:10:38.911 "data_offset": 0, 00:10:38.911 "data_size": 65536 00:10:38.911 } 00:10:38.911 ] 00:10:38.911 }' 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.911 17:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.169 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.169 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.169 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.169 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:39.169 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.429 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.430 [2024-11-26 17:55:21.072282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.430 17:55:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.430 "name": "Existed_Raid", 00:10:39.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.430 "strip_size_kb": 64, 00:10:39.430 "state": "configuring", 00:10:39.430 "raid_level": "raid0", 00:10:39.430 "superblock": false, 00:10:39.430 "num_base_bdevs": 4, 00:10:39.430 "num_base_bdevs_discovered": 2, 00:10:39.430 
"num_base_bdevs_operational": 4, 00:10:39.430 "base_bdevs_list": [ 00:10:39.430 { 00:10:39.430 "name": null, 00:10:39.430 "uuid": "3b7383e1-ba90-45ca-a0d5-3a6d6fa5a44f", 00:10:39.430 "is_configured": false, 00:10:39.430 "data_offset": 0, 00:10:39.430 "data_size": 65536 00:10:39.430 }, 00:10:39.430 { 00:10:39.430 "name": null, 00:10:39.430 "uuid": "f3a49f06-efdf-4cbe-a14b-99832520b20c", 00:10:39.430 "is_configured": false, 00:10:39.430 "data_offset": 0, 00:10:39.430 "data_size": 65536 00:10:39.430 }, 00:10:39.430 { 00:10:39.430 "name": "BaseBdev3", 00:10:39.430 "uuid": "87757f7f-835b-4ed0-adc6-2ede0b6a485d", 00:10:39.430 "is_configured": true, 00:10:39.430 "data_offset": 0, 00:10:39.430 "data_size": 65536 00:10:39.430 }, 00:10:39.430 { 00:10:39.430 "name": "BaseBdev4", 00:10:39.430 "uuid": "bbe027e9-ca06-4151-84dc-470423c9f12b", 00:10:39.430 "is_configured": true, 00:10:39.430 "data_offset": 0, 00:10:39.430 "data_size": 65536 00:10:39.430 } 00:10:39.430 ] 00:10:39.430 }' 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.430 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.000 [2024-11-26 17:55:21.699841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.000 
17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.000 "name": "Existed_Raid", 00:10:40.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.000 "strip_size_kb": 64, 00:10:40.000 "state": "configuring", 00:10:40.000 "raid_level": "raid0", 00:10:40.000 "superblock": false, 00:10:40.000 "num_base_bdevs": 4, 00:10:40.000 "num_base_bdevs_discovered": 3, 00:10:40.000 "num_base_bdevs_operational": 4, 00:10:40.000 "base_bdevs_list": [ 00:10:40.000 { 00:10:40.000 "name": null, 00:10:40.000 "uuid": "3b7383e1-ba90-45ca-a0d5-3a6d6fa5a44f", 00:10:40.000 "is_configured": false, 00:10:40.000 "data_offset": 0, 00:10:40.000 "data_size": 65536 00:10:40.000 }, 00:10:40.000 { 00:10:40.000 "name": "BaseBdev2", 00:10:40.000 "uuid": "f3a49f06-efdf-4cbe-a14b-99832520b20c", 00:10:40.000 "is_configured": true, 00:10:40.000 "data_offset": 0, 00:10:40.000 "data_size": 65536 00:10:40.000 }, 00:10:40.000 { 00:10:40.000 "name": "BaseBdev3", 00:10:40.000 "uuid": "87757f7f-835b-4ed0-adc6-2ede0b6a485d", 00:10:40.000 "is_configured": true, 00:10:40.000 "data_offset": 0, 00:10:40.000 "data_size": 65536 00:10:40.000 }, 00:10:40.000 { 00:10:40.000 "name": "BaseBdev4", 00:10:40.000 "uuid": "bbe027e9-ca06-4151-84dc-470423c9f12b", 00:10:40.000 "is_configured": true, 00:10:40.000 "data_offset": 0, 00:10:40.000 "data_size": 65536 00:10:40.000 } 00:10:40.000 ] 00:10:40.000 }' 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.000 17:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.570 17:55:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3b7383e1-ba90-45ca-a0d5-3a6d6fa5a44f 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.570 [2024-11-26 17:55:22.324098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:40.570 [2024-11-26 17:55:22.324262] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:40.570 [2024-11-26 17:55:22.324275] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:40.570 [2024-11-26 17:55:22.324602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:10:40.570 [2024-11-26 17:55:22.324772] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:40.570 [2024-11-26 17:55:22.324785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:40.570 [2024-11-26 17:55:22.325212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.570 NewBaseBdev 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.570 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:40.570 [ 00:10:40.570 { 00:10:40.570 "name": "NewBaseBdev", 00:10:40.570 "aliases": [ 00:10:40.570 "3b7383e1-ba90-45ca-a0d5-3a6d6fa5a44f" 00:10:40.570 ], 00:10:40.570 "product_name": "Malloc disk", 00:10:40.570 "block_size": 512, 00:10:40.570 "num_blocks": 65536, 00:10:40.570 "uuid": "3b7383e1-ba90-45ca-a0d5-3a6d6fa5a44f", 00:10:40.570 "assigned_rate_limits": { 00:10:40.570 "rw_ios_per_sec": 0, 00:10:40.570 "rw_mbytes_per_sec": 0, 00:10:40.570 "r_mbytes_per_sec": 0, 00:10:40.570 "w_mbytes_per_sec": 0 00:10:40.570 }, 00:10:40.570 "claimed": true, 00:10:40.570 "claim_type": "exclusive_write", 00:10:40.571 "zoned": false, 00:10:40.571 "supported_io_types": { 00:10:40.571 "read": true, 00:10:40.571 "write": true, 00:10:40.571 "unmap": true, 00:10:40.571 "flush": true, 00:10:40.571 "reset": true, 00:10:40.571 "nvme_admin": false, 00:10:40.571 "nvme_io": false, 00:10:40.571 "nvme_io_md": false, 00:10:40.571 "write_zeroes": true, 00:10:40.571 "zcopy": true, 00:10:40.571 "get_zone_info": false, 00:10:40.571 "zone_management": false, 00:10:40.571 "zone_append": false, 00:10:40.571 "compare": false, 00:10:40.571 "compare_and_write": false, 00:10:40.571 "abort": true, 00:10:40.571 "seek_hole": false, 00:10:40.571 "seek_data": false, 00:10:40.571 "copy": true, 00:10:40.571 "nvme_iov_md": false 00:10:40.571 }, 00:10:40.571 "memory_domains": [ 00:10:40.571 { 00:10:40.571 "dma_device_id": "system", 00:10:40.571 "dma_device_type": 1 00:10:40.571 }, 00:10:40.571 { 00:10:40.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.571 "dma_device_type": 2 00:10:40.571 } 00:10:40.571 ], 00:10:40.571 "driver_specific": {} 00:10:40.571 } 00:10:40.571 ] 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.571 "name": "Existed_Raid", 00:10:40.571 "uuid": "809dd069-54c5-4c52-a38c-f0ffab4f685d", 00:10:40.571 "strip_size_kb": 64, 00:10:40.571 "state": "online", 00:10:40.571 "raid_level": "raid0", 00:10:40.571 "superblock": false, 00:10:40.571 "num_base_bdevs": 4, 00:10:40.571 
"num_base_bdevs_discovered": 4, 00:10:40.571 "num_base_bdevs_operational": 4, 00:10:40.571 "base_bdevs_list": [ 00:10:40.571 { 00:10:40.571 "name": "NewBaseBdev", 00:10:40.571 "uuid": "3b7383e1-ba90-45ca-a0d5-3a6d6fa5a44f", 00:10:40.571 "is_configured": true, 00:10:40.571 "data_offset": 0, 00:10:40.571 "data_size": 65536 00:10:40.571 }, 00:10:40.571 { 00:10:40.571 "name": "BaseBdev2", 00:10:40.571 "uuid": "f3a49f06-efdf-4cbe-a14b-99832520b20c", 00:10:40.571 "is_configured": true, 00:10:40.571 "data_offset": 0, 00:10:40.571 "data_size": 65536 00:10:40.571 }, 00:10:40.571 { 00:10:40.571 "name": "BaseBdev3", 00:10:40.571 "uuid": "87757f7f-835b-4ed0-adc6-2ede0b6a485d", 00:10:40.571 "is_configured": true, 00:10:40.571 "data_offset": 0, 00:10:40.571 "data_size": 65536 00:10:40.571 }, 00:10:40.571 { 00:10:40.571 "name": "BaseBdev4", 00:10:40.571 "uuid": "bbe027e9-ca06-4151-84dc-470423c9f12b", 00:10:40.571 "is_configured": true, 00:10:40.571 "data_offset": 0, 00:10:40.571 "data_size": 65536 00:10:40.571 } 00:10:40.571 ] 00:10:40.571 }' 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.571 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.139 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:41.139 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:41.139 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:41.139 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:41.139 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:41.139 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:41.139 17:55:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:41.139 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:41.139 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.140 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.140 [2024-11-26 17:55:22.811782] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.140 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.140 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:41.140 "name": "Existed_Raid", 00:10:41.140 "aliases": [ 00:10:41.140 "809dd069-54c5-4c52-a38c-f0ffab4f685d" 00:10:41.140 ], 00:10:41.140 "product_name": "Raid Volume", 00:10:41.140 "block_size": 512, 00:10:41.140 "num_blocks": 262144, 00:10:41.140 "uuid": "809dd069-54c5-4c52-a38c-f0ffab4f685d", 00:10:41.140 "assigned_rate_limits": { 00:10:41.140 "rw_ios_per_sec": 0, 00:10:41.140 "rw_mbytes_per_sec": 0, 00:10:41.140 "r_mbytes_per_sec": 0, 00:10:41.140 "w_mbytes_per_sec": 0 00:10:41.140 }, 00:10:41.140 "claimed": false, 00:10:41.140 "zoned": false, 00:10:41.140 "supported_io_types": { 00:10:41.140 "read": true, 00:10:41.140 "write": true, 00:10:41.140 "unmap": true, 00:10:41.140 "flush": true, 00:10:41.140 "reset": true, 00:10:41.140 "nvme_admin": false, 00:10:41.140 "nvme_io": false, 00:10:41.140 "nvme_io_md": false, 00:10:41.140 "write_zeroes": true, 00:10:41.140 "zcopy": false, 00:10:41.140 "get_zone_info": false, 00:10:41.140 "zone_management": false, 00:10:41.140 "zone_append": false, 00:10:41.140 "compare": false, 00:10:41.140 "compare_and_write": false, 00:10:41.140 "abort": false, 00:10:41.140 "seek_hole": false, 00:10:41.140 "seek_data": false, 00:10:41.140 "copy": false, 00:10:41.140 "nvme_iov_md": false 00:10:41.140 }, 00:10:41.140 "memory_domains": [ 
00:10:41.140 { 00:10:41.140 "dma_device_id": "system", 00:10:41.140 "dma_device_type": 1 00:10:41.140 }, 00:10:41.140 { 00:10:41.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.140 "dma_device_type": 2 00:10:41.140 }, 00:10:41.140 { 00:10:41.140 "dma_device_id": "system", 00:10:41.140 "dma_device_type": 1 00:10:41.140 }, 00:10:41.140 { 00:10:41.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.140 "dma_device_type": 2 00:10:41.140 }, 00:10:41.140 { 00:10:41.140 "dma_device_id": "system", 00:10:41.140 "dma_device_type": 1 00:10:41.140 }, 00:10:41.140 { 00:10:41.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.140 "dma_device_type": 2 00:10:41.140 }, 00:10:41.140 { 00:10:41.140 "dma_device_id": "system", 00:10:41.140 "dma_device_type": 1 00:10:41.140 }, 00:10:41.140 { 00:10:41.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.140 "dma_device_type": 2 00:10:41.140 } 00:10:41.140 ], 00:10:41.140 "driver_specific": { 00:10:41.140 "raid": { 00:10:41.140 "uuid": "809dd069-54c5-4c52-a38c-f0ffab4f685d", 00:10:41.140 "strip_size_kb": 64, 00:10:41.140 "state": "online", 00:10:41.140 "raid_level": "raid0", 00:10:41.140 "superblock": false, 00:10:41.140 "num_base_bdevs": 4, 00:10:41.140 "num_base_bdevs_discovered": 4, 00:10:41.140 "num_base_bdevs_operational": 4, 00:10:41.140 "base_bdevs_list": [ 00:10:41.140 { 00:10:41.140 "name": "NewBaseBdev", 00:10:41.140 "uuid": "3b7383e1-ba90-45ca-a0d5-3a6d6fa5a44f", 00:10:41.140 "is_configured": true, 00:10:41.140 "data_offset": 0, 00:10:41.140 "data_size": 65536 00:10:41.140 }, 00:10:41.140 { 00:10:41.140 "name": "BaseBdev2", 00:10:41.140 "uuid": "f3a49f06-efdf-4cbe-a14b-99832520b20c", 00:10:41.140 "is_configured": true, 00:10:41.140 "data_offset": 0, 00:10:41.140 "data_size": 65536 00:10:41.140 }, 00:10:41.140 { 00:10:41.140 "name": "BaseBdev3", 00:10:41.140 "uuid": "87757f7f-835b-4ed0-adc6-2ede0b6a485d", 00:10:41.140 "is_configured": true, 00:10:41.140 "data_offset": 0, 00:10:41.140 "data_size": 65536 
00:10:41.140 }, 00:10:41.140 { 00:10:41.140 "name": "BaseBdev4", 00:10:41.140 "uuid": "bbe027e9-ca06-4151-84dc-470423c9f12b", 00:10:41.140 "is_configured": true, 00:10:41.140 "data_offset": 0, 00:10:41.140 "data_size": 65536 00:10:41.140 } 00:10:41.140 ] 00:10:41.140 } 00:10:41.140 } 00:10:41.140 }' 00:10:41.140 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.140 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:41.140 BaseBdev2 00:10:41.140 BaseBdev3 00:10:41.140 BaseBdev4' 00:10:41.140 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.140 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:41.140 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.140 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:41.140 17:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.140 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.140 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.140 17:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.400 
17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.400 [2024-11-26 17:55:23.142794] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.400 [2024-11-26 17:55:23.142911] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.400 [2024-11-26 17:55:23.143078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.400 [2024-11-26 17:55:23.143194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.400 [2024-11-26 17:55:23.143250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69619 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69619 ']' 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69619 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69619 00:10:41.400 killing process with pid 69619 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69619' 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69619 00:10:41.400 [2024-11-26 17:55:23.189844] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.400 17:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69619 00:10:41.971 [2024-11-26 17:55:23.652155] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:43.367 17:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:43.367 00:10:43.367 real 0m12.456s 00:10:43.367 user 0m19.526s 00:10:43.367 sys 0m2.308s 00:10:43.367 17:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.367 17:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.367 ************************************ 00:10:43.367 END TEST raid_state_function_test 00:10:43.367 ************************************ 00:10:43.367 17:55:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:43.367 17:55:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:43.367 17:55:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.367 17:55:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:43.367 ************************************ 00:10:43.367 START TEST raid_state_function_test_sb 00:10:43.367 ************************************ 00:10:43.367 17:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:43.367 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:43.367 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:43.367 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:43.367 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:43.367 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:43.368 
17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70297 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70297' 00:10:43.368 Process raid pid: 70297 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70297 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70297 ']' 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.368 17:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.368 [2024-11-26 17:55:25.150873] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:10:43.368 [2024-11-26 17:55:25.151133] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.627 [2024-11-26 17:55:25.312852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.627 [2024-11-26 17:55:25.449128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.886 [2024-11-26 17:55:25.689070] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.886 [2024-11-26 17:55:25.689152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.454 [2024-11-26 17:55:26.097150] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:44.454 [2024-11-26 17:55:26.097306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:44.454 [2024-11-26 17:55:26.097370] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:44.454 [2024-11-26 17:55:26.097429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:44.454 [2024-11-26 17:55:26.097471] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:44.454 [2024-11-26 17:55:26.097520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:44.454 [2024-11-26 17:55:26.097560] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:44.454 [2024-11-26 17:55:26.097608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.454 17:55:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.454 "name": "Existed_Raid", 00:10:44.454 "uuid": "ec32f2a2-0cac-49e5-b84b-7a301ee3214c", 00:10:44.454 "strip_size_kb": 64, 00:10:44.454 "state": "configuring", 00:10:44.454 "raid_level": "raid0", 00:10:44.454 "superblock": true, 00:10:44.454 "num_base_bdevs": 4, 00:10:44.454 "num_base_bdevs_discovered": 0, 00:10:44.454 "num_base_bdevs_operational": 4, 00:10:44.454 "base_bdevs_list": [ 00:10:44.454 { 00:10:44.454 "name": "BaseBdev1", 00:10:44.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.454 "is_configured": false, 00:10:44.454 "data_offset": 0, 00:10:44.454 "data_size": 0 00:10:44.454 }, 00:10:44.454 { 00:10:44.454 "name": "BaseBdev2", 00:10:44.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.454 "is_configured": false, 00:10:44.454 "data_offset": 0, 00:10:44.454 "data_size": 0 00:10:44.454 }, 00:10:44.454 { 00:10:44.454 "name": "BaseBdev3", 00:10:44.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.454 "is_configured": false, 00:10:44.454 "data_offset": 0, 00:10:44.454 "data_size": 0 00:10:44.454 }, 00:10:44.454 { 00:10:44.454 "name": "BaseBdev4", 00:10:44.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.454 "is_configured": false, 00:10:44.454 "data_offset": 0, 00:10:44.454 "data_size": 0 00:10:44.454 } 00:10:44.454 ] 00:10:44.454 }' 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.454 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.713 17:55:26 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:44.713 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.713 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.713 [2024-11-26 17:55:26.520842] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:44.713 [2024-11-26 17:55:26.521011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:44.713 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.713 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:44.713 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.713 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.713 [2024-11-26 17:55:26.532861] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:44.714 [2024-11-26 17:55:26.533005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:44.714 [2024-11-26 17:55:26.533076] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:44.714 [2024-11-26 17:55:26.533132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:44.714 [2024-11-26 17:55:26.533176] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:44.714 [2024-11-26 17:55:26.533213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:44.714 [2024-11-26 17:55:26.533259] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:44.714 [2024-11-26 17:55:26.533304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:44.714 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.714 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:44.714 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.714 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.972 [2024-11-26 17:55:26.587042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.972 BaseBdev1 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.972 [ 00:10:44.972 { 00:10:44.972 "name": "BaseBdev1", 00:10:44.972 "aliases": [ 00:10:44.972 "e65093cd-0933-4ebe-98ed-f2a6b007de3d" 00:10:44.972 ], 00:10:44.972 "product_name": "Malloc disk", 00:10:44.972 "block_size": 512, 00:10:44.972 "num_blocks": 65536, 00:10:44.972 "uuid": "e65093cd-0933-4ebe-98ed-f2a6b007de3d", 00:10:44.972 "assigned_rate_limits": { 00:10:44.972 "rw_ios_per_sec": 0, 00:10:44.972 "rw_mbytes_per_sec": 0, 00:10:44.972 "r_mbytes_per_sec": 0, 00:10:44.972 "w_mbytes_per_sec": 0 00:10:44.972 }, 00:10:44.972 "claimed": true, 00:10:44.972 "claim_type": "exclusive_write", 00:10:44.972 "zoned": false, 00:10:44.972 "supported_io_types": { 00:10:44.972 "read": true, 00:10:44.972 "write": true, 00:10:44.972 "unmap": true, 00:10:44.972 "flush": true, 00:10:44.972 "reset": true, 00:10:44.972 "nvme_admin": false, 00:10:44.972 "nvme_io": false, 00:10:44.972 "nvme_io_md": false, 00:10:44.972 "write_zeroes": true, 00:10:44.972 "zcopy": true, 00:10:44.972 "get_zone_info": false, 00:10:44.972 "zone_management": false, 00:10:44.972 "zone_append": false, 00:10:44.972 "compare": false, 00:10:44.972 "compare_and_write": false, 00:10:44.972 "abort": true, 00:10:44.972 "seek_hole": false, 00:10:44.972 "seek_data": false, 00:10:44.972 "copy": true, 00:10:44.972 "nvme_iov_md": false 00:10:44.972 }, 00:10:44.972 "memory_domains": [ 00:10:44.972 { 00:10:44.972 "dma_device_id": "system", 00:10:44.972 "dma_device_type": 1 00:10:44.972 }, 00:10:44.972 { 00:10:44.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.972 "dma_device_type": 2 00:10:44.972 } 00:10:44.972 ], 00:10:44.972 "driver_specific": {} 
00:10:44.972 } 00:10:44.972 ] 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.972 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.972 "name": "Existed_Raid", 00:10:44.972 "uuid": "16d0c9d7-9633-4b65-948c-db774a4b4908", 00:10:44.972 "strip_size_kb": 64, 00:10:44.972 "state": "configuring", 00:10:44.972 "raid_level": "raid0", 00:10:44.972 "superblock": true, 00:10:44.972 "num_base_bdevs": 4, 00:10:44.972 "num_base_bdevs_discovered": 1, 00:10:44.972 "num_base_bdevs_operational": 4, 00:10:44.972 "base_bdevs_list": [ 00:10:44.972 { 00:10:44.972 "name": "BaseBdev1", 00:10:44.972 "uuid": "e65093cd-0933-4ebe-98ed-f2a6b007de3d", 00:10:44.972 "is_configured": true, 00:10:44.972 "data_offset": 2048, 00:10:44.973 "data_size": 63488 00:10:44.973 }, 00:10:44.973 { 00:10:44.973 "name": "BaseBdev2", 00:10:44.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.973 "is_configured": false, 00:10:44.973 "data_offset": 0, 00:10:44.973 "data_size": 0 00:10:44.973 }, 00:10:44.973 { 00:10:44.973 "name": "BaseBdev3", 00:10:44.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.973 "is_configured": false, 00:10:44.973 "data_offset": 0, 00:10:44.973 "data_size": 0 00:10:44.973 }, 00:10:44.973 { 00:10:44.973 "name": "BaseBdev4", 00:10:44.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.973 "is_configured": false, 00:10:44.973 "data_offset": 0, 00:10:44.973 "data_size": 0 00:10:44.973 } 00:10:44.973 ] 00:10:44.973 }' 00:10:44.973 17:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.973 17:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.542 [2024-11-26 17:55:27.118217] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:45.542 [2024-11-26 17:55:27.118301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.542 [2024-11-26 17:55:27.130318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.542 [2024-11-26 17:55:27.132596] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.542 [2024-11-26 17:55:27.132665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.542 [2024-11-26 17:55:27.132677] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:45.542 [2024-11-26 17:55:27.132690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:45.542 [2024-11-26 17:55:27.132699] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:45.542 [2024-11-26 17:55:27.132710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:45.542 17:55:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.542 "name": 
"Existed_Raid", 00:10:45.542 "uuid": "e680b9bf-fdd2-4c8c-b095-f20bb935f074", 00:10:45.542 "strip_size_kb": 64, 00:10:45.542 "state": "configuring", 00:10:45.542 "raid_level": "raid0", 00:10:45.542 "superblock": true, 00:10:45.542 "num_base_bdevs": 4, 00:10:45.542 "num_base_bdevs_discovered": 1, 00:10:45.542 "num_base_bdevs_operational": 4, 00:10:45.542 "base_bdevs_list": [ 00:10:45.542 { 00:10:45.542 "name": "BaseBdev1", 00:10:45.542 "uuid": "e65093cd-0933-4ebe-98ed-f2a6b007de3d", 00:10:45.542 "is_configured": true, 00:10:45.542 "data_offset": 2048, 00:10:45.542 "data_size": 63488 00:10:45.542 }, 00:10:45.542 { 00:10:45.542 "name": "BaseBdev2", 00:10:45.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.542 "is_configured": false, 00:10:45.542 "data_offset": 0, 00:10:45.542 "data_size": 0 00:10:45.542 }, 00:10:45.542 { 00:10:45.542 "name": "BaseBdev3", 00:10:45.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.542 "is_configured": false, 00:10:45.542 "data_offset": 0, 00:10:45.542 "data_size": 0 00:10:45.542 }, 00:10:45.542 { 00:10:45.542 "name": "BaseBdev4", 00:10:45.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.542 "is_configured": false, 00:10:45.542 "data_offset": 0, 00:10:45.542 "data_size": 0 00:10:45.542 } 00:10:45.542 ] 00:10:45.542 }' 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.542 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.803 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:45.803 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.803 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.803 [2024-11-26 17:55:27.626800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:45.803 BaseBdev2 00:10:45.803 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.803 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:45.803 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:45.803 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.803 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.803 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.803 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.803 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.803 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.803 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.803 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.803 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:45.803 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.803 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.803 [ 00:10:45.803 { 00:10:45.803 "name": "BaseBdev2", 00:10:45.803 "aliases": [ 00:10:45.803 "c22ed8dc-ab9a-492a-92a1-4d614bec1a62" 00:10:45.803 ], 00:10:45.803 "product_name": "Malloc disk", 00:10:45.803 "block_size": 512, 00:10:45.803 "num_blocks": 65536, 00:10:45.803 "uuid": "c22ed8dc-ab9a-492a-92a1-4d614bec1a62", 00:10:45.803 
"assigned_rate_limits": { 00:10:45.803 "rw_ios_per_sec": 0, 00:10:45.803 "rw_mbytes_per_sec": 0, 00:10:45.803 "r_mbytes_per_sec": 0, 00:10:45.803 "w_mbytes_per_sec": 0 00:10:45.803 }, 00:10:45.803 "claimed": true, 00:10:45.803 "claim_type": "exclusive_write", 00:10:45.803 "zoned": false, 00:10:45.803 "supported_io_types": { 00:10:45.803 "read": true, 00:10:45.803 "write": true, 00:10:45.803 "unmap": true, 00:10:45.803 "flush": true, 00:10:45.803 "reset": true, 00:10:45.803 "nvme_admin": false, 00:10:45.803 "nvme_io": false, 00:10:45.803 "nvme_io_md": false, 00:10:45.803 "write_zeroes": true, 00:10:45.803 "zcopy": true, 00:10:45.803 "get_zone_info": false, 00:10:45.803 "zone_management": false, 00:10:45.803 "zone_append": false, 00:10:45.803 "compare": false, 00:10:45.803 "compare_and_write": false, 00:10:45.803 "abort": true, 00:10:45.803 "seek_hole": false, 00:10:45.803 "seek_data": false, 00:10:45.803 "copy": true, 00:10:45.803 "nvme_iov_md": false 00:10:46.064 }, 00:10:46.064 "memory_domains": [ 00:10:46.064 { 00:10:46.064 "dma_device_id": "system", 00:10:46.064 "dma_device_type": 1 00:10:46.064 }, 00:10:46.064 { 00:10:46.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.064 "dma_device_type": 2 00:10:46.064 } 00:10:46.064 ], 00:10:46.064 "driver_specific": {} 00:10:46.064 } 00:10:46.064 ] 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.064 "name": "Existed_Raid", 00:10:46.064 "uuid": "e680b9bf-fdd2-4c8c-b095-f20bb935f074", 00:10:46.064 "strip_size_kb": 64, 00:10:46.064 "state": "configuring", 00:10:46.064 "raid_level": "raid0", 00:10:46.064 "superblock": true, 00:10:46.064 "num_base_bdevs": 4, 00:10:46.064 "num_base_bdevs_discovered": 2, 00:10:46.064 "num_base_bdevs_operational": 4, 
00:10:46.064 "base_bdevs_list": [ 00:10:46.064 { 00:10:46.064 "name": "BaseBdev1", 00:10:46.064 "uuid": "e65093cd-0933-4ebe-98ed-f2a6b007de3d", 00:10:46.064 "is_configured": true, 00:10:46.064 "data_offset": 2048, 00:10:46.064 "data_size": 63488 00:10:46.064 }, 00:10:46.064 { 00:10:46.064 "name": "BaseBdev2", 00:10:46.064 "uuid": "c22ed8dc-ab9a-492a-92a1-4d614bec1a62", 00:10:46.064 "is_configured": true, 00:10:46.064 "data_offset": 2048, 00:10:46.064 "data_size": 63488 00:10:46.064 }, 00:10:46.064 { 00:10:46.064 "name": "BaseBdev3", 00:10:46.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.064 "is_configured": false, 00:10:46.064 "data_offset": 0, 00:10:46.064 "data_size": 0 00:10:46.064 }, 00:10:46.064 { 00:10:46.064 "name": "BaseBdev4", 00:10:46.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.064 "is_configured": false, 00:10:46.064 "data_offset": 0, 00:10:46.064 "data_size": 0 00:10:46.064 } 00:10:46.064 ] 00:10:46.064 }' 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.064 17:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.323 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:46.323 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.323 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 [2024-11-26 17:55:28.202848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.583 BaseBdev3 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 [ 00:10:46.583 { 00:10:46.583 "name": "BaseBdev3", 00:10:46.583 "aliases": [ 00:10:46.583 "6554eca1-afab-423f-8a26-a7fcdecf0201" 00:10:46.583 ], 00:10:46.583 "product_name": "Malloc disk", 00:10:46.583 "block_size": 512, 00:10:46.583 "num_blocks": 65536, 00:10:46.583 "uuid": "6554eca1-afab-423f-8a26-a7fcdecf0201", 00:10:46.583 "assigned_rate_limits": { 00:10:46.583 "rw_ios_per_sec": 0, 00:10:46.583 "rw_mbytes_per_sec": 0, 00:10:46.583 "r_mbytes_per_sec": 0, 00:10:46.583 "w_mbytes_per_sec": 0 00:10:46.583 }, 00:10:46.583 "claimed": true, 00:10:46.583 "claim_type": "exclusive_write", 00:10:46.583 "zoned": false, 00:10:46.583 "supported_io_types": { 00:10:46.583 "read": true, 00:10:46.583 
"write": true, 00:10:46.583 "unmap": true, 00:10:46.583 "flush": true, 00:10:46.583 "reset": true, 00:10:46.583 "nvme_admin": false, 00:10:46.583 "nvme_io": false, 00:10:46.583 "nvme_io_md": false, 00:10:46.583 "write_zeroes": true, 00:10:46.583 "zcopy": true, 00:10:46.583 "get_zone_info": false, 00:10:46.583 "zone_management": false, 00:10:46.583 "zone_append": false, 00:10:46.583 "compare": false, 00:10:46.583 "compare_and_write": false, 00:10:46.583 "abort": true, 00:10:46.583 "seek_hole": false, 00:10:46.583 "seek_data": false, 00:10:46.583 "copy": true, 00:10:46.583 "nvme_iov_md": false 00:10:46.583 }, 00:10:46.583 "memory_domains": [ 00:10:46.583 { 00:10:46.583 "dma_device_id": "system", 00:10:46.583 "dma_device_type": 1 00:10:46.583 }, 00:10:46.583 { 00:10:46.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.583 "dma_device_type": 2 00:10:46.583 } 00:10:46.583 ], 00:10:46.583 "driver_specific": {} 00:10:46.583 } 00:10:46.583 ] 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:46.583 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.584 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.584 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.584 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.584 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:46.584 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.584 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.584 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.584 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.584 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.584 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.584 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.584 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.584 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.584 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.584 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.584 "name": "Existed_Raid", 00:10:46.584 "uuid": "e680b9bf-fdd2-4c8c-b095-f20bb935f074", 00:10:46.584 "strip_size_kb": 64, 00:10:46.584 "state": "configuring", 00:10:46.584 "raid_level": "raid0", 00:10:46.584 "superblock": true, 00:10:46.584 "num_base_bdevs": 4, 00:10:46.584 "num_base_bdevs_discovered": 3, 00:10:46.584 "num_base_bdevs_operational": 4, 00:10:46.584 "base_bdevs_list": [ 00:10:46.584 { 00:10:46.584 "name": "BaseBdev1", 00:10:46.584 "uuid": "e65093cd-0933-4ebe-98ed-f2a6b007de3d", 00:10:46.584 "is_configured": true, 00:10:46.584 "data_offset": 2048, 00:10:46.584 "data_size": 63488 00:10:46.584 }, 00:10:46.584 { 00:10:46.584 "name": "BaseBdev2", 00:10:46.584 "uuid": 
"c22ed8dc-ab9a-492a-92a1-4d614bec1a62", 00:10:46.584 "is_configured": true, 00:10:46.584 "data_offset": 2048, 00:10:46.584 "data_size": 63488 00:10:46.584 }, 00:10:46.584 { 00:10:46.584 "name": "BaseBdev3", 00:10:46.584 "uuid": "6554eca1-afab-423f-8a26-a7fcdecf0201", 00:10:46.584 "is_configured": true, 00:10:46.584 "data_offset": 2048, 00:10:46.584 "data_size": 63488 00:10:46.584 }, 00:10:46.584 { 00:10:46.584 "name": "BaseBdev4", 00:10:46.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.584 "is_configured": false, 00:10:46.584 "data_offset": 0, 00:10:46.584 "data_size": 0 00:10:46.584 } 00:10:46.584 ] 00:10:46.584 }' 00:10:46.584 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.584 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.154 [2024-11-26 17:55:28.779241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:47.154 [2024-11-26 17:55:28.779723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:47.154 [2024-11-26 17:55:28.779806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:47.154 BaseBdev4 00:10:47.154 [2024-11-26 17:55:28.780225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:47.154 [2024-11-26 17:55:28.780455] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:47.154 [2024-11-26 17:55:28.780509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.154 [2024-11-26 17:55:28.780758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.154 [ 00:10:47.154 { 00:10:47.154 "name": "BaseBdev4", 00:10:47.154 "aliases": [ 00:10:47.154 "0f9bbfdb-d107-4192-a6b1-a24c8d190a1f" 00:10:47.154 ], 00:10:47.154 "product_name": "Malloc disk", 00:10:47.154 "block_size": 512, 00:10:47.154 
"num_blocks": 65536, 00:10:47.154 "uuid": "0f9bbfdb-d107-4192-a6b1-a24c8d190a1f", 00:10:47.154 "assigned_rate_limits": { 00:10:47.154 "rw_ios_per_sec": 0, 00:10:47.154 "rw_mbytes_per_sec": 0, 00:10:47.154 "r_mbytes_per_sec": 0, 00:10:47.154 "w_mbytes_per_sec": 0 00:10:47.154 }, 00:10:47.154 "claimed": true, 00:10:47.154 "claim_type": "exclusive_write", 00:10:47.154 "zoned": false, 00:10:47.154 "supported_io_types": { 00:10:47.154 "read": true, 00:10:47.154 "write": true, 00:10:47.154 "unmap": true, 00:10:47.154 "flush": true, 00:10:47.154 "reset": true, 00:10:47.154 "nvme_admin": false, 00:10:47.154 "nvme_io": false, 00:10:47.154 "nvme_io_md": false, 00:10:47.154 "write_zeroes": true, 00:10:47.154 "zcopy": true, 00:10:47.154 "get_zone_info": false, 00:10:47.154 "zone_management": false, 00:10:47.154 "zone_append": false, 00:10:47.154 "compare": false, 00:10:47.154 "compare_and_write": false, 00:10:47.154 "abort": true, 00:10:47.154 "seek_hole": false, 00:10:47.154 "seek_data": false, 00:10:47.154 "copy": true, 00:10:47.154 "nvme_iov_md": false 00:10:47.154 }, 00:10:47.154 "memory_domains": [ 00:10:47.154 { 00:10:47.154 "dma_device_id": "system", 00:10:47.154 "dma_device_type": 1 00:10:47.154 }, 00:10:47.154 { 00:10:47.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.154 "dma_device_type": 2 00:10:47.154 } 00:10:47.154 ], 00:10:47.154 "driver_specific": {} 00:10:47.154 } 00:10:47.154 ] 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.154 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.155 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.155 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.155 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.155 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.155 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.155 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.155 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.155 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.155 "name": "Existed_Raid", 00:10:47.155 "uuid": "e680b9bf-fdd2-4c8c-b095-f20bb935f074", 00:10:47.155 "strip_size_kb": 64, 00:10:47.155 "state": "online", 00:10:47.155 "raid_level": "raid0", 00:10:47.155 "superblock": true, 00:10:47.155 "num_base_bdevs": 4, 
00:10:47.155 "num_base_bdevs_discovered": 4, 00:10:47.155 "num_base_bdevs_operational": 4, 00:10:47.155 "base_bdevs_list": [ 00:10:47.155 { 00:10:47.155 "name": "BaseBdev1", 00:10:47.155 "uuid": "e65093cd-0933-4ebe-98ed-f2a6b007de3d", 00:10:47.155 "is_configured": true, 00:10:47.155 "data_offset": 2048, 00:10:47.155 "data_size": 63488 00:10:47.155 }, 00:10:47.155 { 00:10:47.155 "name": "BaseBdev2", 00:10:47.155 "uuid": "c22ed8dc-ab9a-492a-92a1-4d614bec1a62", 00:10:47.155 "is_configured": true, 00:10:47.155 "data_offset": 2048, 00:10:47.155 "data_size": 63488 00:10:47.155 }, 00:10:47.155 { 00:10:47.155 "name": "BaseBdev3", 00:10:47.155 "uuid": "6554eca1-afab-423f-8a26-a7fcdecf0201", 00:10:47.155 "is_configured": true, 00:10:47.155 "data_offset": 2048, 00:10:47.155 "data_size": 63488 00:10:47.155 }, 00:10:47.155 { 00:10:47.155 "name": "BaseBdev4", 00:10:47.155 "uuid": "0f9bbfdb-d107-4192-a6b1-a24c8d190a1f", 00:10:47.155 "is_configured": true, 00:10:47.155 "data_offset": 2048, 00:10:47.155 "data_size": 63488 00:10:47.155 } 00:10:47.155 ] 00:10:47.155 }' 00:10:47.155 17:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.155 17:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.414 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:47.414 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:47.414 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:47.414 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:47.414 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:47.414 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:47.414 
17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:47.414 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:47.414 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.414 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.414 [2024-11-26 17:55:29.235559] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.414 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.672 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:47.672 "name": "Existed_Raid", 00:10:47.672 "aliases": [ 00:10:47.672 "e680b9bf-fdd2-4c8c-b095-f20bb935f074" 00:10:47.672 ], 00:10:47.672 "product_name": "Raid Volume", 00:10:47.672 "block_size": 512, 00:10:47.672 "num_blocks": 253952, 00:10:47.672 "uuid": "e680b9bf-fdd2-4c8c-b095-f20bb935f074", 00:10:47.672 "assigned_rate_limits": { 00:10:47.672 "rw_ios_per_sec": 0, 00:10:47.672 "rw_mbytes_per_sec": 0, 00:10:47.672 "r_mbytes_per_sec": 0, 00:10:47.672 "w_mbytes_per_sec": 0 00:10:47.672 }, 00:10:47.672 "claimed": false, 00:10:47.672 "zoned": false, 00:10:47.672 "supported_io_types": { 00:10:47.672 "read": true, 00:10:47.672 "write": true, 00:10:47.672 "unmap": true, 00:10:47.672 "flush": true, 00:10:47.672 "reset": true, 00:10:47.672 "nvme_admin": false, 00:10:47.672 "nvme_io": false, 00:10:47.672 "nvme_io_md": false, 00:10:47.672 "write_zeroes": true, 00:10:47.672 "zcopy": false, 00:10:47.672 "get_zone_info": false, 00:10:47.672 "zone_management": false, 00:10:47.672 "zone_append": false, 00:10:47.672 "compare": false, 00:10:47.672 "compare_and_write": false, 00:10:47.672 "abort": false, 00:10:47.672 "seek_hole": false, 00:10:47.672 "seek_data": false, 00:10:47.672 "copy": false, 00:10:47.672 
"nvme_iov_md": false 00:10:47.672 }, 00:10:47.672 "memory_domains": [ 00:10:47.672 { 00:10:47.672 "dma_device_id": "system", 00:10:47.672 "dma_device_type": 1 00:10:47.672 }, 00:10:47.673 { 00:10:47.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.673 "dma_device_type": 2 00:10:47.673 }, 00:10:47.673 { 00:10:47.673 "dma_device_id": "system", 00:10:47.673 "dma_device_type": 1 00:10:47.673 }, 00:10:47.673 { 00:10:47.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.673 "dma_device_type": 2 00:10:47.673 }, 00:10:47.673 { 00:10:47.673 "dma_device_id": "system", 00:10:47.673 "dma_device_type": 1 00:10:47.673 }, 00:10:47.673 { 00:10:47.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.673 "dma_device_type": 2 00:10:47.673 }, 00:10:47.673 { 00:10:47.673 "dma_device_id": "system", 00:10:47.673 "dma_device_type": 1 00:10:47.673 }, 00:10:47.673 { 00:10:47.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.673 "dma_device_type": 2 00:10:47.673 } 00:10:47.673 ], 00:10:47.673 "driver_specific": { 00:10:47.673 "raid": { 00:10:47.673 "uuid": "e680b9bf-fdd2-4c8c-b095-f20bb935f074", 00:10:47.673 "strip_size_kb": 64, 00:10:47.673 "state": "online", 00:10:47.673 "raid_level": "raid0", 00:10:47.673 "superblock": true, 00:10:47.673 "num_base_bdevs": 4, 00:10:47.673 "num_base_bdevs_discovered": 4, 00:10:47.673 "num_base_bdevs_operational": 4, 00:10:47.673 "base_bdevs_list": [ 00:10:47.673 { 00:10:47.673 "name": "BaseBdev1", 00:10:47.673 "uuid": "e65093cd-0933-4ebe-98ed-f2a6b007de3d", 00:10:47.673 "is_configured": true, 00:10:47.673 "data_offset": 2048, 00:10:47.673 "data_size": 63488 00:10:47.673 }, 00:10:47.673 { 00:10:47.673 "name": "BaseBdev2", 00:10:47.673 "uuid": "c22ed8dc-ab9a-492a-92a1-4d614bec1a62", 00:10:47.673 "is_configured": true, 00:10:47.673 "data_offset": 2048, 00:10:47.673 "data_size": 63488 00:10:47.673 }, 00:10:47.673 { 00:10:47.673 "name": "BaseBdev3", 00:10:47.673 "uuid": "6554eca1-afab-423f-8a26-a7fcdecf0201", 00:10:47.673 "is_configured": true, 
00:10:47.673 "data_offset": 2048, 00:10:47.673 "data_size": 63488 00:10:47.673 }, 00:10:47.673 { 00:10:47.673 "name": "BaseBdev4", 00:10:47.673 "uuid": "0f9bbfdb-d107-4192-a6b1-a24c8d190a1f", 00:10:47.673 "is_configured": true, 00:10:47.673 "data_offset": 2048, 00:10:47.673 "data_size": 63488 00:10:47.673 } 00:10:47.673 ] 00:10:47.673 } 00:10:47.673 } 00:10:47.673 }' 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:47.673 BaseBdev2 00:10:47.673 BaseBdev3 00:10:47.673 BaseBdev4' 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.673 17:55:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.673 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.933 [2024-11-26 17:55:29.586637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:47.933 [2024-11-26 17:55:29.586767] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.933 [2024-11-26 17:55:29.586880] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.933 "name": "Existed_Raid", 00:10:47.933 "uuid": "e680b9bf-fdd2-4c8c-b095-f20bb935f074", 00:10:47.933 "strip_size_kb": 64, 00:10:47.933 "state": "offline", 00:10:47.933 "raid_level": "raid0", 00:10:47.933 "superblock": true, 00:10:47.933 "num_base_bdevs": 4, 00:10:47.933 "num_base_bdevs_discovered": 3, 00:10:47.933 "num_base_bdevs_operational": 3, 00:10:47.933 "base_bdevs_list": [ 00:10:47.933 { 00:10:47.933 "name": null, 00:10:47.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.933 "is_configured": false, 00:10:47.933 "data_offset": 0, 00:10:47.933 "data_size": 63488 00:10:47.933 }, 00:10:47.933 { 00:10:47.933 "name": "BaseBdev2", 00:10:47.933 "uuid": "c22ed8dc-ab9a-492a-92a1-4d614bec1a62", 00:10:47.933 "is_configured": true, 00:10:47.933 "data_offset": 2048, 00:10:47.933 "data_size": 63488 00:10:47.933 }, 00:10:47.933 { 00:10:47.933 "name": "BaseBdev3", 00:10:47.933 "uuid": "6554eca1-afab-423f-8a26-a7fcdecf0201", 00:10:47.933 "is_configured": true, 00:10:47.933 "data_offset": 2048, 00:10:47.933 "data_size": 63488 00:10:47.933 }, 00:10:47.933 { 00:10:47.933 "name": "BaseBdev4", 00:10:47.933 "uuid": "0f9bbfdb-d107-4192-a6b1-a24c8d190a1f", 00:10:47.933 "is_configured": true, 00:10:47.933 "data_offset": 2048, 00:10:47.933 "data_size": 63488 00:10:47.933 } 00:10:47.933 ] 00:10:47.933 }' 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.933 17:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.501 
17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.501 [2024-11-26 17:55:30.212277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.501 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:48.760 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.760 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.760 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:48.760 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.760 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.760 [2024-11-26 17:55:30.385320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:48.760 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.760 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:48.760 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.760 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.760 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.760 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.760 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.760 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.760 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.760 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.760 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:48.760 17:55:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.760 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.760 [2024-11-26 17:55:30.562982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:48.760 [2024-11-26 17:55:30.563144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:49.019 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.019 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:49.019 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.020 BaseBdev2 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.020 [ 00:10:49.020 { 00:10:49.020 "name": "BaseBdev2", 00:10:49.020 "aliases": [ 00:10:49.020 
"bfb33d1c-142e-4bff-b4b4-e5a82ef1b21b" 00:10:49.020 ], 00:10:49.020 "product_name": "Malloc disk", 00:10:49.020 "block_size": 512, 00:10:49.020 "num_blocks": 65536, 00:10:49.020 "uuid": "bfb33d1c-142e-4bff-b4b4-e5a82ef1b21b", 00:10:49.020 "assigned_rate_limits": { 00:10:49.020 "rw_ios_per_sec": 0, 00:10:49.020 "rw_mbytes_per_sec": 0, 00:10:49.020 "r_mbytes_per_sec": 0, 00:10:49.020 "w_mbytes_per_sec": 0 00:10:49.020 }, 00:10:49.020 "claimed": false, 00:10:49.020 "zoned": false, 00:10:49.020 "supported_io_types": { 00:10:49.020 "read": true, 00:10:49.020 "write": true, 00:10:49.020 "unmap": true, 00:10:49.020 "flush": true, 00:10:49.020 "reset": true, 00:10:49.020 "nvme_admin": false, 00:10:49.020 "nvme_io": false, 00:10:49.020 "nvme_io_md": false, 00:10:49.020 "write_zeroes": true, 00:10:49.020 "zcopy": true, 00:10:49.020 "get_zone_info": false, 00:10:49.020 "zone_management": false, 00:10:49.020 "zone_append": false, 00:10:49.020 "compare": false, 00:10:49.020 "compare_and_write": false, 00:10:49.020 "abort": true, 00:10:49.020 "seek_hole": false, 00:10:49.020 "seek_data": false, 00:10:49.020 "copy": true, 00:10:49.020 "nvme_iov_md": false 00:10:49.020 }, 00:10:49.020 "memory_domains": [ 00:10:49.020 { 00:10:49.020 "dma_device_id": "system", 00:10:49.020 "dma_device_type": 1 00:10:49.020 }, 00:10:49.020 { 00:10:49.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.020 "dma_device_type": 2 00:10:49.020 } 00:10:49.020 ], 00:10:49.020 "driver_specific": {} 00:10:49.020 } 00:10:49.020 ] 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.020 17:55:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.020 BaseBdev3 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:49.020 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.279 [ 00:10:49.279 { 
00:10:49.279 "name": "BaseBdev3", 00:10:49.279 "aliases": [ 00:10:49.279 "9900b0f5-500a-453d-8b65-e4867cfa1389" 00:10:49.279 ], 00:10:49.279 "product_name": "Malloc disk", 00:10:49.279 "block_size": 512, 00:10:49.279 "num_blocks": 65536, 00:10:49.279 "uuid": "9900b0f5-500a-453d-8b65-e4867cfa1389", 00:10:49.279 "assigned_rate_limits": { 00:10:49.279 "rw_ios_per_sec": 0, 00:10:49.279 "rw_mbytes_per_sec": 0, 00:10:49.279 "r_mbytes_per_sec": 0, 00:10:49.279 "w_mbytes_per_sec": 0 00:10:49.279 }, 00:10:49.279 "claimed": false, 00:10:49.279 "zoned": false, 00:10:49.279 "supported_io_types": { 00:10:49.279 "read": true, 00:10:49.279 "write": true, 00:10:49.279 "unmap": true, 00:10:49.279 "flush": true, 00:10:49.279 "reset": true, 00:10:49.279 "nvme_admin": false, 00:10:49.279 "nvme_io": false, 00:10:49.279 "nvme_io_md": false, 00:10:49.279 "write_zeroes": true, 00:10:49.279 "zcopy": true, 00:10:49.279 "get_zone_info": false, 00:10:49.279 "zone_management": false, 00:10:49.279 "zone_append": false, 00:10:49.279 "compare": false, 00:10:49.279 "compare_and_write": false, 00:10:49.279 "abort": true, 00:10:49.279 "seek_hole": false, 00:10:49.279 "seek_data": false, 00:10:49.279 "copy": true, 00:10:49.279 "nvme_iov_md": false 00:10:49.279 }, 00:10:49.279 "memory_domains": [ 00:10:49.279 { 00:10:49.279 "dma_device_id": "system", 00:10:49.279 "dma_device_type": 1 00:10:49.279 }, 00:10:49.279 { 00:10:49.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.279 "dma_device_type": 2 00:10:49.279 } 00:10:49.279 ], 00:10:49.279 "driver_specific": {} 00:10:49.279 } 00:10:49.279 ] 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.279 BaseBdev4 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:49.279 [ 00:10:49.279 { 00:10:49.279 "name": "BaseBdev4", 00:10:49.279 "aliases": [ 00:10:49.279 "cf922b3c-a525-4fee-9d36-ef60f977d905" 00:10:49.279 ], 00:10:49.279 "product_name": "Malloc disk", 00:10:49.279 "block_size": 512, 00:10:49.279 "num_blocks": 65536, 00:10:49.279 "uuid": "cf922b3c-a525-4fee-9d36-ef60f977d905", 00:10:49.279 "assigned_rate_limits": { 00:10:49.279 "rw_ios_per_sec": 0, 00:10:49.279 "rw_mbytes_per_sec": 0, 00:10:49.279 "r_mbytes_per_sec": 0, 00:10:49.279 "w_mbytes_per_sec": 0 00:10:49.279 }, 00:10:49.279 "claimed": false, 00:10:49.279 "zoned": false, 00:10:49.279 "supported_io_types": { 00:10:49.279 "read": true, 00:10:49.279 "write": true, 00:10:49.279 "unmap": true, 00:10:49.279 "flush": true, 00:10:49.279 "reset": true, 00:10:49.279 "nvme_admin": false, 00:10:49.279 "nvme_io": false, 00:10:49.279 "nvme_io_md": false, 00:10:49.279 "write_zeroes": true, 00:10:49.279 "zcopy": true, 00:10:49.279 "get_zone_info": false, 00:10:49.279 "zone_management": false, 00:10:49.279 "zone_append": false, 00:10:49.279 "compare": false, 00:10:49.279 "compare_and_write": false, 00:10:49.279 "abort": true, 00:10:49.279 "seek_hole": false, 00:10:49.279 "seek_data": false, 00:10:49.279 "copy": true, 00:10:49.279 "nvme_iov_md": false 00:10:49.279 }, 00:10:49.279 "memory_domains": [ 00:10:49.279 { 00:10:49.279 "dma_device_id": "system", 00:10:49.279 "dma_device_type": 1 00:10:49.279 }, 00:10:49.279 { 00:10:49.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.279 "dma_device_type": 2 00:10:49.279 } 00:10:49.279 ], 00:10:49.279 "driver_specific": {} 00:10:49.279 } 00:10:49.279 ] 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.279 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.280 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.280 17:55:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.280 17:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:49.280 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.280 17:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.280 [2024-11-26 17:55:31.004806] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.280 [2024-11-26 17:55:31.004967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.280 [2024-11-26 17:55:31.005062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.280 [2024-11-26 17:55:31.007454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.280 [2024-11-26 17:55:31.007599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.280 "name": "Existed_Raid", 00:10:49.280 "uuid": "d69afae1-3a11-4ce8-a9ac-25924aa2ea40", 00:10:49.280 "strip_size_kb": 64, 00:10:49.280 "state": "configuring", 00:10:49.280 "raid_level": "raid0", 00:10:49.280 "superblock": true, 00:10:49.280 "num_base_bdevs": 4, 00:10:49.280 "num_base_bdevs_discovered": 3, 00:10:49.280 "num_base_bdevs_operational": 4, 00:10:49.280 "base_bdevs_list": [ 00:10:49.280 { 00:10:49.280 "name": "BaseBdev1", 00:10:49.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.280 "is_configured": false, 00:10:49.280 "data_offset": 0, 00:10:49.280 "data_size": 0 00:10:49.280 }, 00:10:49.280 { 00:10:49.280 "name": "BaseBdev2", 00:10:49.280 "uuid": "bfb33d1c-142e-4bff-b4b4-e5a82ef1b21b", 00:10:49.280 "is_configured": true, 00:10:49.280 "data_offset": 2048, 00:10:49.280 "data_size": 63488 
00:10:49.280 }, 00:10:49.280 { 00:10:49.280 "name": "BaseBdev3", 00:10:49.280 "uuid": "9900b0f5-500a-453d-8b65-e4867cfa1389", 00:10:49.280 "is_configured": true, 00:10:49.280 "data_offset": 2048, 00:10:49.280 "data_size": 63488 00:10:49.280 }, 00:10:49.280 { 00:10:49.280 "name": "BaseBdev4", 00:10:49.280 "uuid": "cf922b3c-a525-4fee-9d36-ef60f977d905", 00:10:49.280 "is_configured": true, 00:10:49.280 "data_offset": 2048, 00:10:49.280 "data_size": 63488 00:10:49.280 } 00:10:49.280 ] 00:10:49.280 }' 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.280 17:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.848 [2024-11-26 17:55:31.488078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.848 "name": "Existed_Raid", 00:10:49.848 "uuid": "d69afae1-3a11-4ce8-a9ac-25924aa2ea40", 00:10:49.848 "strip_size_kb": 64, 00:10:49.848 "state": "configuring", 00:10:49.848 "raid_level": "raid0", 00:10:49.848 "superblock": true, 00:10:49.848 "num_base_bdevs": 4, 00:10:49.848 "num_base_bdevs_discovered": 2, 00:10:49.848 "num_base_bdevs_operational": 4, 00:10:49.848 "base_bdevs_list": [ 00:10:49.848 { 00:10:49.848 "name": "BaseBdev1", 00:10:49.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.848 "is_configured": false, 00:10:49.848 "data_offset": 0, 00:10:49.848 "data_size": 0 00:10:49.848 }, 00:10:49.848 { 00:10:49.848 "name": null, 00:10:49.848 "uuid": "bfb33d1c-142e-4bff-b4b4-e5a82ef1b21b", 00:10:49.848 "is_configured": false, 00:10:49.848 "data_offset": 0, 00:10:49.848 "data_size": 63488 
00:10:49.848 }, 00:10:49.848 { 00:10:49.848 "name": "BaseBdev3", 00:10:49.848 "uuid": "9900b0f5-500a-453d-8b65-e4867cfa1389", 00:10:49.848 "is_configured": true, 00:10:49.848 "data_offset": 2048, 00:10:49.848 "data_size": 63488 00:10:49.848 }, 00:10:49.848 { 00:10:49.848 "name": "BaseBdev4", 00:10:49.848 "uuid": "cf922b3c-a525-4fee-9d36-ef60f977d905", 00:10:49.848 "is_configured": true, 00:10:49.848 "data_offset": 2048, 00:10:49.848 "data_size": 63488 00:10:49.848 } 00:10:49.848 ] 00:10:49.848 }' 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.848 17:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.108 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.108 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:50.108 17:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.108 17:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.108 17:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.368 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:50.368 17:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:50.368 17:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.368 17:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.368 [2024-11-26 17:55:32.031675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.368 BaseBdev1 00:10:50.368 17:55:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.368 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:50.368 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:50.368 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.368 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.368 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.369 [ 00:10:50.369 { 00:10:50.369 "name": "BaseBdev1", 00:10:50.369 "aliases": [ 00:10:50.369 "d3a21cf1-4dfe-4e94-9b43-9b4b2f8a95b3" 00:10:50.369 ], 00:10:50.369 "product_name": "Malloc disk", 00:10:50.369 "block_size": 512, 00:10:50.369 "num_blocks": 65536, 00:10:50.369 "uuid": "d3a21cf1-4dfe-4e94-9b43-9b4b2f8a95b3", 00:10:50.369 "assigned_rate_limits": { 00:10:50.369 "rw_ios_per_sec": 0, 00:10:50.369 "rw_mbytes_per_sec": 0, 
00:10:50.369 "r_mbytes_per_sec": 0, 00:10:50.369 "w_mbytes_per_sec": 0 00:10:50.369 }, 00:10:50.369 "claimed": true, 00:10:50.369 "claim_type": "exclusive_write", 00:10:50.369 "zoned": false, 00:10:50.369 "supported_io_types": { 00:10:50.369 "read": true, 00:10:50.369 "write": true, 00:10:50.369 "unmap": true, 00:10:50.369 "flush": true, 00:10:50.369 "reset": true, 00:10:50.369 "nvme_admin": false, 00:10:50.369 "nvme_io": false, 00:10:50.369 "nvme_io_md": false, 00:10:50.369 "write_zeroes": true, 00:10:50.369 "zcopy": true, 00:10:50.369 "get_zone_info": false, 00:10:50.369 "zone_management": false, 00:10:50.369 "zone_append": false, 00:10:50.369 "compare": false, 00:10:50.369 "compare_and_write": false, 00:10:50.369 "abort": true, 00:10:50.369 "seek_hole": false, 00:10:50.369 "seek_data": false, 00:10:50.369 "copy": true, 00:10:50.369 "nvme_iov_md": false 00:10:50.369 }, 00:10:50.369 "memory_domains": [ 00:10:50.369 { 00:10:50.369 "dma_device_id": "system", 00:10:50.369 "dma_device_type": 1 00:10:50.369 }, 00:10:50.369 { 00:10:50.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.369 "dma_device_type": 2 00:10:50.369 } 00:10:50.369 ], 00:10:50.369 "driver_specific": {} 00:10:50.369 } 00:10:50.369 ] 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.369 17:55:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.369 "name": "Existed_Raid", 00:10:50.369 "uuid": "d69afae1-3a11-4ce8-a9ac-25924aa2ea40", 00:10:50.369 "strip_size_kb": 64, 00:10:50.369 "state": "configuring", 00:10:50.369 "raid_level": "raid0", 00:10:50.369 "superblock": true, 00:10:50.369 "num_base_bdevs": 4, 00:10:50.369 "num_base_bdevs_discovered": 3, 00:10:50.369 "num_base_bdevs_operational": 4, 00:10:50.369 "base_bdevs_list": [ 00:10:50.369 { 00:10:50.369 "name": "BaseBdev1", 00:10:50.369 "uuid": "d3a21cf1-4dfe-4e94-9b43-9b4b2f8a95b3", 00:10:50.369 "is_configured": true, 00:10:50.369 "data_offset": 2048, 00:10:50.369 "data_size": 63488 00:10:50.369 }, 00:10:50.369 { 
00:10:50.369 "name": null, 00:10:50.369 "uuid": "bfb33d1c-142e-4bff-b4b4-e5a82ef1b21b", 00:10:50.369 "is_configured": false, 00:10:50.369 "data_offset": 0, 00:10:50.369 "data_size": 63488 00:10:50.369 }, 00:10:50.369 { 00:10:50.369 "name": "BaseBdev3", 00:10:50.369 "uuid": "9900b0f5-500a-453d-8b65-e4867cfa1389", 00:10:50.369 "is_configured": true, 00:10:50.369 "data_offset": 2048, 00:10:50.369 "data_size": 63488 00:10:50.369 }, 00:10:50.369 { 00:10:50.369 "name": "BaseBdev4", 00:10:50.369 "uuid": "cf922b3c-a525-4fee-9d36-ef60f977d905", 00:10:50.369 "is_configured": true, 00:10:50.369 "data_offset": 2048, 00:10:50.369 "data_size": 63488 00:10:50.369 } 00:10:50.369 ] 00:10:50.369 }' 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.369 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.938 [2024-11-26 17:55:32.550984] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.938 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.939 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.939 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.939 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.939 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.939 17:55:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.939 "name": "Existed_Raid", 00:10:50.939 "uuid": "d69afae1-3a11-4ce8-a9ac-25924aa2ea40", 00:10:50.939 "strip_size_kb": 64, 00:10:50.939 "state": "configuring", 00:10:50.939 "raid_level": "raid0", 00:10:50.939 "superblock": true, 00:10:50.939 "num_base_bdevs": 4, 00:10:50.939 "num_base_bdevs_discovered": 2, 00:10:50.939 "num_base_bdevs_operational": 4, 00:10:50.939 "base_bdevs_list": [ 00:10:50.939 { 00:10:50.939 "name": "BaseBdev1", 00:10:50.939 "uuid": "d3a21cf1-4dfe-4e94-9b43-9b4b2f8a95b3", 00:10:50.939 "is_configured": true, 00:10:50.939 "data_offset": 2048, 00:10:50.939 "data_size": 63488 00:10:50.939 }, 00:10:50.939 { 00:10:50.939 "name": null, 00:10:50.939 "uuid": "bfb33d1c-142e-4bff-b4b4-e5a82ef1b21b", 00:10:50.939 "is_configured": false, 00:10:50.939 "data_offset": 0, 00:10:50.939 "data_size": 63488 00:10:50.939 }, 00:10:50.939 { 00:10:50.939 "name": null, 00:10:50.939 "uuid": "9900b0f5-500a-453d-8b65-e4867cfa1389", 00:10:50.939 "is_configured": false, 00:10:50.939 "data_offset": 0, 00:10:50.939 "data_size": 63488 00:10:50.939 }, 00:10:50.939 { 00:10:50.939 "name": "BaseBdev4", 00:10:50.939 "uuid": "cf922b3c-a525-4fee-9d36-ef60f977d905", 00:10:50.939 "is_configured": true, 00:10:50.939 "data_offset": 2048, 00:10:50.939 "data_size": 63488 00:10:50.939 } 00:10:50.939 ] 00:10:50.939 }' 00:10:50.939 17:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.939 17:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.198 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.198 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:51.198 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.199 
17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.460 [2024-11-26 17:55:33.098230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.460 "name": "Existed_Raid", 00:10:51.460 "uuid": "d69afae1-3a11-4ce8-a9ac-25924aa2ea40", 00:10:51.460 "strip_size_kb": 64, 00:10:51.460 "state": "configuring", 00:10:51.460 "raid_level": "raid0", 00:10:51.460 "superblock": true, 00:10:51.460 "num_base_bdevs": 4, 00:10:51.460 "num_base_bdevs_discovered": 3, 00:10:51.460 "num_base_bdevs_operational": 4, 00:10:51.460 "base_bdevs_list": [ 00:10:51.460 { 00:10:51.460 "name": "BaseBdev1", 00:10:51.460 "uuid": "d3a21cf1-4dfe-4e94-9b43-9b4b2f8a95b3", 00:10:51.460 "is_configured": true, 00:10:51.460 "data_offset": 2048, 00:10:51.460 "data_size": 63488 00:10:51.460 }, 00:10:51.460 { 00:10:51.460 "name": null, 00:10:51.460 "uuid": "bfb33d1c-142e-4bff-b4b4-e5a82ef1b21b", 00:10:51.460 "is_configured": false, 00:10:51.460 "data_offset": 0, 00:10:51.460 "data_size": 63488 00:10:51.460 }, 00:10:51.460 { 00:10:51.460 "name": "BaseBdev3", 00:10:51.460 "uuid": "9900b0f5-500a-453d-8b65-e4867cfa1389", 00:10:51.460 "is_configured": true, 00:10:51.460 "data_offset": 2048, 00:10:51.460 "data_size": 63488 00:10:51.460 }, 00:10:51.460 { 00:10:51.460 "name": "BaseBdev4", 00:10:51.460 "uuid": 
"cf922b3c-a525-4fee-9d36-ef60f977d905", 00:10:51.460 "is_configured": true, 00:10:51.460 "data_offset": 2048, 00:10:51.460 "data_size": 63488 00:10:51.460 } 00:10:51.460 ] 00:10:51.460 }' 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.460 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.030 [2024-11-26 17:55:33.637729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.030 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.030 "name": "Existed_Raid", 00:10:52.030 "uuid": "d69afae1-3a11-4ce8-a9ac-25924aa2ea40", 00:10:52.030 "strip_size_kb": 64, 00:10:52.030 "state": "configuring", 00:10:52.030 "raid_level": "raid0", 00:10:52.030 "superblock": true, 00:10:52.030 "num_base_bdevs": 4, 00:10:52.030 "num_base_bdevs_discovered": 2, 00:10:52.030 "num_base_bdevs_operational": 4, 00:10:52.030 "base_bdevs_list": [ 00:10:52.030 { 00:10:52.030 "name": null, 00:10:52.030 
"uuid": "d3a21cf1-4dfe-4e94-9b43-9b4b2f8a95b3", 00:10:52.030 "is_configured": false, 00:10:52.031 "data_offset": 0, 00:10:52.031 "data_size": 63488 00:10:52.031 }, 00:10:52.031 { 00:10:52.031 "name": null, 00:10:52.031 "uuid": "bfb33d1c-142e-4bff-b4b4-e5a82ef1b21b", 00:10:52.031 "is_configured": false, 00:10:52.031 "data_offset": 0, 00:10:52.031 "data_size": 63488 00:10:52.031 }, 00:10:52.031 { 00:10:52.031 "name": "BaseBdev3", 00:10:52.031 "uuid": "9900b0f5-500a-453d-8b65-e4867cfa1389", 00:10:52.031 "is_configured": true, 00:10:52.031 "data_offset": 2048, 00:10:52.031 "data_size": 63488 00:10:52.031 }, 00:10:52.031 { 00:10:52.031 "name": "BaseBdev4", 00:10:52.031 "uuid": "cf922b3c-a525-4fee-9d36-ef60f977d905", 00:10:52.031 "is_configured": true, 00:10:52.031 "data_offset": 2048, 00:10:52.031 "data_size": 63488 00:10:52.031 } 00:10:52.031 ] 00:10:52.031 }' 00:10:52.031 17:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.031 17:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.599 [2024-11-26 17:55:34.274710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.599 "name": "Existed_Raid", 00:10:52.599 "uuid": "d69afae1-3a11-4ce8-a9ac-25924aa2ea40", 00:10:52.599 "strip_size_kb": 64, 00:10:52.599 "state": "configuring", 00:10:52.599 "raid_level": "raid0", 00:10:52.599 "superblock": true, 00:10:52.599 "num_base_bdevs": 4, 00:10:52.599 "num_base_bdevs_discovered": 3, 00:10:52.599 "num_base_bdevs_operational": 4, 00:10:52.599 "base_bdevs_list": [ 00:10:52.599 { 00:10:52.599 "name": null, 00:10:52.599 "uuid": "d3a21cf1-4dfe-4e94-9b43-9b4b2f8a95b3", 00:10:52.599 "is_configured": false, 00:10:52.599 "data_offset": 0, 00:10:52.599 "data_size": 63488 00:10:52.599 }, 00:10:52.599 { 00:10:52.599 "name": "BaseBdev2", 00:10:52.599 "uuid": "bfb33d1c-142e-4bff-b4b4-e5a82ef1b21b", 00:10:52.599 "is_configured": true, 00:10:52.599 "data_offset": 2048, 00:10:52.599 "data_size": 63488 00:10:52.599 }, 00:10:52.599 { 00:10:52.599 "name": "BaseBdev3", 00:10:52.599 "uuid": "9900b0f5-500a-453d-8b65-e4867cfa1389", 00:10:52.599 "is_configured": true, 00:10:52.599 "data_offset": 2048, 00:10:52.599 "data_size": 63488 00:10:52.599 }, 00:10:52.599 { 00:10:52.599 "name": "BaseBdev4", 00:10:52.599 "uuid": "cf922b3c-a525-4fee-9d36-ef60f977d905", 00:10:52.599 "is_configured": true, 00:10:52.599 "data_offset": 2048, 00:10:52.599 "data_size": 63488 00:10:52.599 } 00:10:52.599 ] 00:10:52.599 }' 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.599 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.858 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:52.858 17:55:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.858 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.858 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.858 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d3a21cf1-4dfe-4e94-9b43-9b4b2f8a95b3 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.118 [2024-11-26 17:55:34.831748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:53.118 [2024-11-26 17:55:34.832100] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:53.118 [2024-11-26 17:55:34.832118] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:53.118 [2024-11-26 17:55:34.832436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:53.118 [2024-11-26 17:55:34.832603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:53.118 [2024-11-26 17:55:34.832617] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:53.118 [2024-11-26 17:55:34.832788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.118 NewBaseBdev 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.118 17:55:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.118 [ 00:10:53.118 { 00:10:53.118 "name": "NewBaseBdev", 00:10:53.118 "aliases": [ 00:10:53.118 "d3a21cf1-4dfe-4e94-9b43-9b4b2f8a95b3" 00:10:53.118 ], 00:10:53.118 "product_name": "Malloc disk", 00:10:53.118 "block_size": 512, 00:10:53.118 "num_blocks": 65536, 00:10:53.118 "uuid": "d3a21cf1-4dfe-4e94-9b43-9b4b2f8a95b3", 00:10:53.118 "assigned_rate_limits": { 00:10:53.118 "rw_ios_per_sec": 0, 00:10:53.118 "rw_mbytes_per_sec": 0, 00:10:53.118 "r_mbytes_per_sec": 0, 00:10:53.118 "w_mbytes_per_sec": 0 00:10:53.118 }, 00:10:53.118 "claimed": true, 00:10:53.118 "claim_type": "exclusive_write", 00:10:53.118 "zoned": false, 00:10:53.118 "supported_io_types": { 00:10:53.118 "read": true, 00:10:53.118 "write": true, 00:10:53.118 "unmap": true, 00:10:53.118 "flush": true, 00:10:53.118 "reset": true, 00:10:53.118 "nvme_admin": false, 00:10:53.118 "nvme_io": false, 00:10:53.118 "nvme_io_md": false, 00:10:53.118 "write_zeroes": true, 00:10:53.118 "zcopy": true, 00:10:53.118 "get_zone_info": false, 00:10:53.118 "zone_management": false, 00:10:53.118 "zone_append": false, 00:10:53.118 "compare": false, 00:10:53.118 "compare_and_write": false, 00:10:53.118 "abort": true, 00:10:53.118 "seek_hole": false, 00:10:53.118 "seek_data": false, 00:10:53.118 "copy": true, 00:10:53.118 "nvme_iov_md": false 00:10:53.118 }, 00:10:53.118 "memory_domains": [ 00:10:53.118 { 00:10:53.118 "dma_device_id": "system", 00:10:53.118 "dma_device_type": 1 00:10:53.118 }, 00:10:53.118 { 00:10:53.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.118 "dma_device_type": 2 00:10:53.118 } 00:10:53.118 ], 00:10:53.118 "driver_specific": {} 00:10:53.118 } 00:10:53.118 ] 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:53.118 17:55:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.118 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.118 "name": "Existed_Raid", 00:10:53.118 "uuid": "d69afae1-3a11-4ce8-a9ac-25924aa2ea40", 00:10:53.118 "strip_size_kb": 64, 00:10:53.118 
"state": "online", 00:10:53.118 "raid_level": "raid0", 00:10:53.118 "superblock": true, 00:10:53.118 "num_base_bdevs": 4, 00:10:53.118 "num_base_bdevs_discovered": 4, 00:10:53.118 "num_base_bdevs_operational": 4, 00:10:53.118 "base_bdevs_list": [ 00:10:53.118 { 00:10:53.118 "name": "NewBaseBdev", 00:10:53.118 "uuid": "d3a21cf1-4dfe-4e94-9b43-9b4b2f8a95b3", 00:10:53.118 "is_configured": true, 00:10:53.118 "data_offset": 2048, 00:10:53.118 "data_size": 63488 00:10:53.118 }, 00:10:53.118 { 00:10:53.118 "name": "BaseBdev2", 00:10:53.118 "uuid": "bfb33d1c-142e-4bff-b4b4-e5a82ef1b21b", 00:10:53.118 "is_configured": true, 00:10:53.118 "data_offset": 2048, 00:10:53.118 "data_size": 63488 00:10:53.118 }, 00:10:53.118 { 00:10:53.118 "name": "BaseBdev3", 00:10:53.119 "uuid": "9900b0f5-500a-453d-8b65-e4867cfa1389", 00:10:53.119 "is_configured": true, 00:10:53.119 "data_offset": 2048, 00:10:53.119 "data_size": 63488 00:10:53.119 }, 00:10:53.119 { 00:10:53.119 "name": "BaseBdev4", 00:10:53.119 "uuid": "cf922b3c-a525-4fee-9d36-ef60f977d905", 00:10:53.119 "is_configured": true, 00:10:53.119 "data_offset": 2048, 00:10:53.119 "data_size": 63488 00:10:53.119 } 00:10:53.119 ] 00:10:53.119 }' 00:10:53.119 17:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.119 17:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:53.687 
17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:53.687 [2024-11-26 17:55:35.359610] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:53.687 "name": "Existed_Raid", 00:10:53.687 "aliases": [ 00:10:53.687 "d69afae1-3a11-4ce8-a9ac-25924aa2ea40" 00:10:53.687 ], 00:10:53.687 "product_name": "Raid Volume", 00:10:53.687 "block_size": 512, 00:10:53.687 "num_blocks": 253952, 00:10:53.687 "uuid": "d69afae1-3a11-4ce8-a9ac-25924aa2ea40", 00:10:53.687 "assigned_rate_limits": { 00:10:53.687 "rw_ios_per_sec": 0, 00:10:53.687 "rw_mbytes_per_sec": 0, 00:10:53.687 "r_mbytes_per_sec": 0, 00:10:53.687 "w_mbytes_per_sec": 0 00:10:53.687 }, 00:10:53.687 "claimed": false, 00:10:53.687 "zoned": false, 00:10:53.687 "supported_io_types": { 00:10:53.687 "read": true, 00:10:53.687 "write": true, 00:10:53.687 "unmap": true, 00:10:53.687 "flush": true, 00:10:53.687 "reset": true, 00:10:53.687 "nvme_admin": false, 00:10:53.687 "nvme_io": false, 00:10:53.687 "nvme_io_md": false, 00:10:53.687 "write_zeroes": true, 00:10:53.687 "zcopy": false, 00:10:53.687 "get_zone_info": false, 00:10:53.687 "zone_management": false, 00:10:53.687 "zone_append": false, 00:10:53.687 "compare": false, 00:10:53.687 "compare_and_write": false, 00:10:53.687 "abort": 
false, 00:10:53.687 "seek_hole": false, 00:10:53.687 "seek_data": false, 00:10:53.687 "copy": false, 00:10:53.687 "nvme_iov_md": false 00:10:53.687 }, 00:10:53.687 "memory_domains": [ 00:10:53.687 { 00:10:53.687 "dma_device_id": "system", 00:10:53.687 "dma_device_type": 1 00:10:53.687 }, 00:10:53.687 { 00:10:53.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.687 "dma_device_type": 2 00:10:53.687 }, 00:10:53.687 { 00:10:53.687 "dma_device_id": "system", 00:10:53.687 "dma_device_type": 1 00:10:53.687 }, 00:10:53.687 { 00:10:53.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.687 "dma_device_type": 2 00:10:53.687 }, 00:10:53.687 { 00:10:53.687 "dma_device_id": "system", 00:10:53.687 "dma_device_type": 1 00:10:53.687 }, 00:10:53.687 { 00:10:53.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.687 "dma_device_type": 2 00:10:53.687 }, 00:10:53.687 { 00:10:53.687 "dma_device_id": "system", 00:10:53.687 "dma_device_type": 1 00:10:53.687 }, 00:10:53.687 { 00:10:53.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.687 "dma_device_type": 2 00:10:53.687 } 00:10:53.687 ], 00:10:53.687 "driver_specific": { 00:10:53.687 "raid": { 00:10:53.687 "uuid": "d69afae1-3a11-4ce8-a9ac-25924aa2ea40", 00:10:53.687 "strip_size_kb": 64, 00:10:53.687 "state": "online", 00:10:53.687 "raid_level": "raid0", 00:10:53.687 "superblock": true, 00:10:53.687 "num_base_bdevs": 4, 00:10:53.687 "num_base_bdevs_discovered": 4, 00:10:53.687 "num_base_bdevs_operational": 4, 00:10:53.687 "base_bdevs_list": [ 00:10:53.687 { 00:10:53.687 "name": "NewBaseBdev", 00:10:53.687 "uuid": "d3a21cf1-4dfe-4e94-9b43-9b4b2f8a95b3", 00:10:53.687 "is_configured": true, 00:10:53.687 "data_offset": 2048, 00:10:53.687 "data_size": 63488 00:10:53.687 }, 00:10:53.687 { 00:10:53.687 "name": "BaseBdev2", 00:10:53.687 "uuid": "bfb33d1c-142e-4bff-b4b4-e5a82ef1b21b", 00:10:53.687 "is_configured": true, 00:10:53.687 "data_offset": 2048, 00:10:53.687 "data_size": 63488 00:10:53.687 }, 00:10:53.687 { 00:10:53.687 
"name": "BaseBdev3", 00:10:53.687 "uuid": "9900b0f5-500a-453d-8b65-e4867cfa1389", 00:10:53.687 "is_configured": true, 00:10:53.687 "data_offset": 2048, 00:10:53.687 "data_size": 63488 00:10:53.687 }, 00:10:53.687 { 00:10:53.687 "name": "BaseBdev4", 00:10:53.687 "uuid": "cf922b3c-a525-4fee-9d36-ef60f977d905", 00:10:53.687 "is_configured": true, 00:10:53.687 "data_offset": 2048, 00:10:53.687 "data_size": 63488 00:10:53.687 } 00:10:53.687 ] 00:10:53.687 } 00:10:53.687 } 00:10:53.687 }' 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:53.687 BaseBdev2 00:10:53.687 BaseBdev3 00:10:53.687 BaseBdev4' 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.687 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.947 17:55:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.947 [2024-11-26 17:55:35.690638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.947 [2024-11-26 17:55:35.690691] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.947 [2024-11-26 17:55:35.690801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.947 [2024-11-26 17:55:35.690883] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.947 [2024-11-26 17:55:35.690900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70297 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70297 ']' 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70297 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70297 00:10:53.947 killing process with pid 70297 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70297' 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70297 00:10:53.947 [2024-11-26 17:55:35.728646] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.947 17:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70297 00:10:54.513 [2024-11-26 17:55:36.218200] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:55.888 17:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:55.888 00:10:55.888 real 0m12.555s 00:10:55.888 user 0m19.672s 00:10:55.888 sys 0m2.175s 00:10:55.888 17:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.888 17:55:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.888 ************************************ 00:10:55.888 END TEST raid_state_function_test_sb 00:10:55.888 ************************************ 00:10:55.888 17:55:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:55.888 17:55:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:55.888 17:55:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.888 17:55:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.888 ************************************ 00:10:55.888 START TEST raid_superblock_test 00:10:55.888 ************************************ 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70982 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70982 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70982 ']' 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.888 17:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.145 [2024-11-26 17:55:37.767056] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:10:56.145 [2024-11-26 17:55:37.767193] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70982 ] 00:10:56.145 [2024-11-26 17:55:37.945187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.403 [2024-11-26 17:55:38.083215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.661 [2024-11-26 17:55:38.325814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.661 [2024-11-26 17:55:38.325889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.919 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:56.920 
17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.920 malloc1 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.920 [2024-11-26 17:55:38.732198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:56.920 [2024-11-26 17:55:38.732265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.920 [2024-11-26 17:55:38.732291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:56.920 [2024-11-26 17:55:38.732303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.920 [2024-11-26 17:55:38.734878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.920 [2024-11-26 17:55:38.734924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:56.920 pt1 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.920 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.179 malloc2 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.179 [2024-11-26 17:55:38.794915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:57.179 [2024-11-26 17:55:38.794979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.179 [2024-11-26 17:55:38.795010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:57.179 [2024-11-26 17:55:38.795034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.179 [2024-11-26 17:55:38.797499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.179 [2024-11-26 17:55:38.797542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:57.179 
pt2 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.179 malloc3 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.179 [2024-11-26 17:55:38.870179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:57.179 [2024-11-26 17:55:38.870241] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.179 [2024-11-26 17:55:38.870265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:57.179 [2024-11-26 17:55:38.870277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.179 [2024-11-26 17:55:38.872720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.179 [2024-11-26 17:55:38.872765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:57.179 pt3 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.179 malloc4 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.179 [2024-11-26 17:55:38.932065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:57.179 [2024-11-26 17:55:38.932132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.179 [2024-11-26 17:55:38.932159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:57.179 [2024-11-26 17:55:38.932171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.179 [2024-11-26 17:55:38.934654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.179 [2024-11-26 17:55:38.934699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:57.179 pt4 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.179 [2024-11-26 17:55:38.944083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:57.179 [2024-11-26 
17:55:38.946190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:57.179 [2024-11-26 17:55:38.946295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:57.179 [2024-11-26 17:55:38.946362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:57.179 [2024-11-26 17:55:38.946570] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:57.179 [2024-11-26 17:55:38.946591] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:57.179 [2024-11-26 17:55:38.946898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:57.179 [2024-11-26 17:55:38.947126] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:57.179 [2024-11-26 17:55:38.947150] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:57.179 [2024-11-26 17:55:38.947332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.179 "name": "raid_bdev1", 00:10:57.179 "uuid": "7cb4101c-ed6d-4859-9e84-640df28c964d", 00:10:57.179 "strip_size_kb": 64, 00:10:57.179 "state": "online", 00:10:57.179 "raid_level": "raid0", 00:10:57.179 "superblock": true, 00:10:57.179 "num_base_bdevs": 4, 00:10:57.179 "num_base_bdevs_discovered": 4, 00:10:57.179 "num_base_bdevs_operational": 4, 00:10:57.179 "base_bdevs_list": [ 00:10:57.179 { 00:10:57.179 "name": "pt1", 00:10:57.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.179 "is_configured": true, 00:10:57.179 "data_offset": 2048, 00:10:57.179 "data_size": 63488 00:10:57.179 }, 00:10:57.179 { 00:10:57.179 "name": "pt2", 00:10:57.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.179 "is_configured": true, 00:10:57.179 "data_offset": 2048, 00:10:57.179 "data_size": 63488 00:10:57.179 }, 00:10:57.179 { 00:10:57.179 "name": "pt3", 00:10:57.179 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.179 "is_configured": true, 00:10:57.179 "data_offset": 2048, 00:10:57.179 
"data_size": 63488 00:10:57.179 }, 00:10:57.179 { 00:10:57.179 "name": "pt4", 00:10:57.179 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:57.179 "is_configured": true, 00:10:57.179 "data_offset": 2048, 00:10:57.179 "data_size": 63488 00:10:57.179 } 00:10:57.179 ] 00:10:57.179 }' 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.179 17:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.747 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:57.747 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:57.747 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.747 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:57.747 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:57.747 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:57.747 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.747 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:57.747 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.747 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.747 [2024-11-26 17:55:39.419655] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.747 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.747 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.747 "name": "raid_bdev1", 00:10:57.747 "aliases": [ 00:10:57.747 "7cb4101c-ed6d-4859-9e84-640df28c964d" 
00:10:57.747 ], 00:10:57.747 "product_name": "Raid Volume", 00:10:57.747 "block_size": 512, 00:10:57.747 "num_blocks": 253952, 00:10:57.747 "uuid": "7cb4101c-ed6d-4859-9e84-640df28c964d", 00:10:57.747 "assigned_rate_limits": { 00:10:57.747 "rw_ios_per_sec": 0, 00:10:57.747 "rw_mbytes_per_sec": 0, 00:10:57.747 "r_mbytes_per_sec": 0, 00:10:57.747 "w_mbytes_per_sec": 0 00:10:57.747 }, 00:10:57.747 "claimed": false, 00:10:57.747 "zoned": false, 00:10:57.747 "supported_io_types": { 00:10:57.747 "read": true, 00:10:57.747 "write": true, 00:10:57.747 "unmap": true, 00:10:57.747 "flush": true, 00:10:57.747 "reset": true, 00:10:57.747 "nvme_admin": false, 00:10:57.747 "nvme_io": false, 00:10:57.747 "nvme_io_md": false, 00:10:57.747 "write_zeroes": true, 00:10:57.747 "zcopy": false, 00:10:57.747 "get_zone_info": false, 00:10:57.747 "zone_management": false, 00:10:57.747 "zone_append": false, 00:10:57.747 "compare": false, 00:10:57.747 "compare_and_write": false, 00:10:57.747 "abort": false, 00:10:57.747 "seek_hole": false, 00:10:57.747 "seek_data": false, 00:10:57.747 "copy": false, 00:10:57.747 "nvme_iov_md": false 00:10:57.747 }, 00:10:57.747 "memory_domains": [ 00:10:57.747 { 00:10:57.747 "dma_device_id": "system", 00:10:57.747 "dma_device_type": 1 00:10:57.747 }, 00:10:57.747 { 00:10:57.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.747 "dma_device_type": 2 00:10:57.747 }, 00:10:57.747 { 00:10:57.747 "dma_device_id": "system", 00:10:57.747 "dma_device_type": 1 00:10:57.747 }, 00:10:57.747 { 00:10:57.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.747 "dma_device_type": 2 00:10:57.747 }, 00:10:57.747 { 00:10:57.747 "dma_device_id": "system", 00:10:57.747 "dma_device_type": 1 00:10:57.747 }, 00:10:57.747 { 00:10:57.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.747 "dma_device_type": 2 00:10:57.747 }, 00:10:57.747 { 00:10:57.747 "dma_device_id": "system", 00:10:57.747 "dma_device_type": 1 00:10:57.747 }, 00:10:57.747 { 00:10:57.747 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:57.747 "dma_device_type": 2 00:10:57.747 } 00:10:57.747 ], 00:10:57.747 "driver_specific": { 00:10:57.747 "raid": { 00:10:57.747 "uuid": "7cb4101c-ed6d-4859-9e84-640df28c964d", 00:10:57.747 "strip_size_kb": 64, 00:10:57.747 "state": "online", 00:10:57.747 "raid_level": "raid0", 00:10:57.747 "superblock": true, 00:10:57.747 "num_base_bdevs": 4, 00:10:57.747 "num_base_bdevs_discovered": 4, 00:10:57.747 "num_base_bdevs_operational": 4, 00:10:57.747 "base_bdevs_list": [ 00:10:57.747 { 00:10:57.747 "name": "pt1", 00:10:57.747 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.747 "is_configured": true, 00:10:57.747 "data_offset": 2048, 00:10:57.747 "data_size": 63488 00:10:57.747 }, 00:10:57.747 { 00:10:57.747 "name": "pt2", 00:10:57.747 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.747 "is_configured": true, 00:10:57.747 "data_offset": 2048, 00:10:57.747 "data_size": 63488 00:10:57.747 }, 00:10:57.747 { 00:10:57.748 "name": "pt3", 00:10:57.748 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.748 "is_configured": true, 00:10:57.748 "data_offset": 2048, 00:10:57.748 "data_size": 63488 00:10:57.748 }, 00:10:57.748 { 00:10:57.748 "name": "pt4", 00:10:57.748 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:57.748 "is_configured": true, 00:10:57.748 "data_offset": 2048, 00:10:57.748 "data_size": 63488 00:10:57.748 } 00:10:57.748 ] 00:10:57.748 } 00:10:57.748 } 00:10:57.748 }' 00:10:57.748 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.748 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:57.748 pt2 00:10:57.748 pt3 00:10:57.748 pt4' 00:10:57.748 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.748 17:55:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.748 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.748 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.748 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:57.748 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.748 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.748 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.748 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.748 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.748 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.748 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:57.748 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.748 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.748 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.748 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.010 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.010 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.010 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.010 17:55:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.010 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:58.010 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.010 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.010 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.010 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.010 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.010 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.010 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.010 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:58.010 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.010 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.010 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.010 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.010 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.010 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:58.011 [2024-11-26 17:55:39.711193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7cb4101c-ed6d-4859-9e84-640df28c964d 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7cb4101c-ed6d-4859-9e84-640df28c964d ']' 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.011 [2024-11-26 17:55:39.758724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:58.011 [2024-11-26 17:55:39.758763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.011 [2024-11-26 17:55:39.758865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.011 [2024-11-26 17:55:39.758946] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:58.011 [2024-11-26 17:55:39.758964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.011 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.290 [2024-11-26 17:55:39.930474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:58.290 [2024-11-26 17:55:39.932669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:58.290 [2024-11-26 17:55:39.932731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:58.290 [2024-11-26 17:55:39.932772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:58.290 [2024-11-26 17:55:39.932832] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:58.290 [2024-11-26 17:55:39.932908] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:58.290 [2024-11-26 17:55:39.932949] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:58.290 [2024-11-26 17:55:39.932979] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:58.290 [2024-11-26 17:55:39.933000] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:58.290 [2024-11-26 17:55:39.933035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:10:58.290 request: 00:10:58.290 { 00:10:58.290 "name": "raid_bdev1", 00:10:58.290 "raid_level": "raid0", 00:10:58.290 "base_bdevs": [ 00:10:58.290 "malloc1", 00:10:58.290 "malloc2", 00:10:58.290 "malloc3", 00:10:58.290 "malloc4" 00:10:58.290 ], 00:10:58.290 "strip_size_kb": 64, 00:10:58.290 "superblock": false, 00:10:58.290 "method": "bdev_raid_create", 00:10:58.290 "req_id": 1 00:10:58.290 } 00:10:58.290 Got JSON-RPC error response 00:10:58.290 response: 00:10:58.290 { 00:10:58.290 "code": -17, 00:10:58.290 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:58.290 } 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.290 17:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.290 [2024-11-26 17:55:40.002301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:58.290 [2024-11-26 17:55:40.002382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.290 [2024-11-26 17:55:40.002424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:58.290 [2024-11-26 17:55:40.002442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.290 [2024-11-26 17:55:40.005137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.290 [2024-11-26 17:55:40.005190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:58.290 [2024-11-26 17:55:40.005324] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:58.290 [2024-11-26 17:55:40.005423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:58.290 pt1 00:10:58.290 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.290 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:58.290 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.290 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.290 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.290 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.290 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:58.290 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.290 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.291 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.291 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.291 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.291 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.291 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.291 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.291 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.291 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.291 "name": "raid_bdev1", 00:10:58.291 "uuid": "7cb4101c-ed6d-4859-9e84-640df28c964d", 00:10:58.291 "strip_size_kb": 64, 00:10:58.291 "state": "configuring", 00:10:58.291 "raid_level": "raid0", 00:10:58.291 "superblock": true, 00:10:58.291 "num_base_bdevs": 4, 00:10:58.291 "num_base_bdevs_discovered": 1, 00:10:58.291 "num_base_bdevs_operational": 4, 00:10:58.291 "base_bdevs_list": [ 00:10:58.291 { 00:10:58.291 "name": "pt1", 00:10:58.291 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.291 "is_configured": true, 00:10:58.291 "data_offset": 2048, 00:10:58.291 "data_size": 63488 00:10:58.291 }, 00:10:58.291 { 00:10:58.291 "name": null, 00:10:58.291 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.291 "is_configured": false, 00:10:58.291 "data_offset": 2048, 00:10:58.291 "data_size": 63488 00:10:58.291 }, 00:10:58.291 { 00:10:58.291 "name": null, 00:10:58.291 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:58.291 "is_configured": false, 00:10:58.291 "data_offset": 2048, 00:10:58.291 "data_size": 63488 00:10:58.291 }, 00:10:58.291 { 00:10:58.291 "name": null, 00:10:58.291 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:58.291 "is_configured": false, 00:10:58.291 "data_offset": 2048, 00:10:58.291 "data_size": 63488 00:10:58.291 } 00:10:58.291 ] 00:10:58.291 }' 00:10:58.291 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.291 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.859 [2024-11-26 17:55:40.477600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:58.859 [2024-11-26 17:55:40.477691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.859 [2024-11-26 17:55:40.477717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:58.859 [2024-11-26 17:55:40.477730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.859 [2024-11-26 17:55:40.478282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.859 [2024-11-26 17:55:40.478316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:58.859 [2024-11-26 17:55:40.478417] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:58.859 [2024-11-26 17:55:40.478453] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:58.859 pt2 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.859 [2024-11-26 17:55:40.489596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.859 17:55:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.859 "name": "raid_bdev1", 00:10:58.859 "uuid": "7cb4101c-ed6d-4859-9e84-640df28c964d", 00:10:58.859 "strip_size_kb": 64, 00:10:58.859 "state": "configuring", 00:10:58.859 "raid_level": "raid0", 00:10:58.859 "superblock": true, 00:10:58.859 "num_base_bdevs": 4, 00:10:58.859 "num_base_bdevs_discovered": 1, 00:10:58.859 "num_base_bdevs_operational": 4, 00:10:58.859 "base_bdevs_list": [ 00:10:58.859 { 00:10:58.859 "name": "pt1", 00:10:58.859 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.859 "is_configured": true, 00:10:58.859 "data_offset": 2048, 00:10:58.859 "data_size": 63488 00:10:58.859 }, 00:10:58.859 { 00:10:58.859 "name": null, 00:10:58.859 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.859 "is_configured": false, 00:10:58.859 "data_offset": 0, 00:10:58.859 "data_size": 63488 00:10:58.859 }, 00:10:58.859 { 00:10:58.859 "name": null, 00:10:58.859 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.859 "is_configured": false, 00:10:58.859 "data_offset": 2048, 00:10:58.859 "data_size": 63488 00:10:58.859 }, 00:10:58.859 { 00:10:58.859 "name": null, 00:10:58.859 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:58.859 "is_configured": false, 00:10:58.859 "data_offset": 2048, 00:10:58.859 "data_size": 63488 00:10:58.859 } 00:10:58.859 ] 00:10:58.859 }' 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.859 17:55:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:59.118 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:59.118 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:59.118 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:59.118 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.118 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.118 [2024-11-26 17:55:40.944924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:59.118 [2024-11-26 17:55:40.945068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.118 [2024-11-26 17:55:40.945118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:59.118 [2024-11-26 17:55:40.945169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.118 [2024-11-26 17:55:40.945734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.118 [2024-11-26 17:55:40.945809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:59.118 [2024-11-26 17:55:40.945942] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:59.118 [2024-11-26 17:55:40.945998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:59.118 pt2 00:10:59.118 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.118 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:59.118 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:59.118 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:59.118 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.118 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.118 [2024-11-26 17:55:40.956857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:59.118 [2024-11-26 17:55:40.956966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.118 [2024-11-26 17:55:40.957028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:59.118 [2024-11-26 17:55:40.957064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.118 [2024-11-26 17:55:40.957542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.118 [2024-11-26 17:55:40.957609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:59.119 [2024-11-26 17:55:40.957723] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:59.119 [2024-11-26 17:55:40.957787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:59.119 pt3 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.119 [2024-11-26 17:55:40.968810] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:59.119 [2024-11-26 17:55:40.968859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.119 [2024-11-26 17:55:40.968878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:59.119 [2024-11-26 17:55:40.968888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.119 [2024-11-26 17:55:40.969341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.119 [2024-11-26 17:55:40.969368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:59.119 [2024-11-26 17:55:40.969452] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:59.119 [2024-11-26 17:55:40.969476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:59.119 [2024-11-26 17:55:40.969636] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:59.119 [2024-11-26 17:55:40.969653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:59.119 [2024-11-26 17:55:40.969921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:59.119 [2024-11-26 17:55:40.970108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:59.119 [2024-11-26 17:55:40.970125] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:59.119 [2024-11-26 17:55:40.970295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.119 pt4 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.119 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.378 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.378 17:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.378 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.378 17:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.378 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.378 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.378 "name": "raid_bdev1", 00:10:59.378 "uuid": "7cb4101c-ed6d-4859-9e84-640df28c964d", 00:10:59.378 "strip_size_kb": 64, 00:10:59.378 "state": "online", 00:10:59.378 "raid_level": "raid0", 00:10:59.378 
"superblock": true, 00:10:59.378 "num_base_bdevs": 4, 00:10:59.378 "num_base_bdevs_discovered": 4, 00:10:59.378 "num_base_bdevs_operational": 4, 00:10:59.378 "base_bdevs_list": [ 00:10:59.378 { 00:10:59.378 "name": "pt1", 00:10:59.378 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.378 "is_configured": true, 00:10:59.378 "data_offset": 2048, 00:10:59.378 "data_size": 63488 00:10:59.378 }, 00:10:59.378 { 00:10:59.378 "name": "pt2", 00:10:59.378 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.378 "is_configured": true, 00:10:59.378 "data_offset": 2048, 00:10:59.378 "data_size": 63488 00:10:59.378 }, 00:10:59.378 { 00:10:59.378 "name": "pt3", 00:10:59.378 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.378 "is_configured": true, 00:10:59.378 "data_offset": 2048, 00:10:59.378 "data_size": 63488 00:10:59.378 }, 00:10:59.378 { 00:10:59.378 "name": "pt4", 00:10:59.378 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:59.378 "is_configured": true, 00:10:59.378 "data_offset": 2048, 00:10:59.378 "data_size": 63488 00:10:59.378 } 00:10:59.378 ] 00:10:59.378 }' 00:10:59.379 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.379 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.638 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:59.638 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:59.638 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:59.638 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:59.638 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:59.638 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:59.638 17:55:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:59.638 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:59.638 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.638 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.638 [2024-11-26 17:55:41.492407] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.898 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.898 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:59.898 "name": "raid_bdev1", 00:10:59.898 "aliases": [ 00:10:59.898 "7cb4101c-ed6d-4859-9e84-640df28c964d" 00:10:59.898 ], 00:10:59.898 "product_name": "Raid Volume", 00:10:59.898 "block_size": 512, 00:10:59.898 "num_blocks": 253952, 00:10:59.898 "uuid": "7cb4101c-ed6d-4859-9e84-640df28c964d", 00:10:59.898 "assigned_rate_limits": { 00:10:59.898 "rw_ios_per_sec": 0, 00:10:59.898 "rw_mbytes_per_sec": 0, 00:10:59.898 "r_mbytes_per_sec": 0, 00:10:59.898 "w_mbytes_per_sec": 0 00:10:59.898 }, 00:10:59.898 "claimed": false, 00:10:59.898 "zoned": false, 00:10:59.898 "supported_io_types": { 00:10:59.898 "read": true, 00:10:59.898 "write": true, 00:10:59.898 "unmap": true, 00:10:59.898 "flush": true, 00:10:59.898 "reset": true, 00:10:59.898 "nvme_admin": false, 00:10:59.898 "nvme_io": false, 00:10:59.898 "nvme_io_md": false, 00:10:59.898 "write_zeroes": true, 00:10:59.898 "zcopy": false, 00:10:59.898 "get_zone_info": false, 00:10:59.898 "zone_management": false, 00:10:59.898 "zone_append": false, 00:10:59.898 "compare": false, 00:10:59.898 "compare_and_write": false, 00:10:59.898 "abort": false, 00:10:59.898 "seek_hole": false, 00:10:59.898 "seek_data": false, 00:10:59.898 "copy": false, 00:10:59.898 "nvme_iov_md": false 00:10:59.898 }, 00:10:59.898 
"memory_domains": [ 00:10:59.898 { 00:10:59.898 "dma_device_id": "system", 00:10:59.898 "dma_device_type": 1 00:10:59.898 }, 00:10:59.898 { 00:10:59.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.898 "dma_device_type": 2 00:10:59.898 }, 00:10:59.898 { 00:10:59.898 "dma_device_id": "system", 00:10:59.898 "dma_device_type": 1 00:10:59.898 }, 00:10:59.898 { 00:10:59.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.898 "dma_device_type": 2 00:10:59.898 }, 00:10:59.898 { 00:10:59.898 "dma_device_id": "system", 00:10:59.898 "dma_device_type": 1 00:10:59.898 }, 00:10:59.898 { 00:10:59.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.898 "dma_device_type": 2 00:10:59.898 }, 00:10:59.898 { 00:10:59.898 "dma_device_id": "system", 00:10:59.898 "dma_device_type": 1 00:10:59.898 }, 00:10:59.898 { 00:10:59.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.898 "dma_device_type": 2 00:10:59.898 } 00:10:59.898 ], 00:10:59.898 "driver_specific": { 00:10:59.898 "raid": { 00:10:59.898 "uuid": "7cb4101c-ed6d-4859-9e84-640df28c964d", 00:10:59.898 "strip_size_kb": 64, 00:10:59.899 "state": "online", 00:10:59.899 "raid_level": "raid0", 00:10:59.899 "superblock": true, 00:10:59.899 "num_base_bdevs": 4, 00:10:59.899 "num_base_bdevs_discovered": 4, 00:10:59.899 "num_base_bdevs_operational": 4, 00:10:59.899 "base_bdevs_list": [ 00:10:59.899 { 00:10:59.899 "name": "pt1", 00:10:59.899 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.899 "is_configured": true, 00:10:59.899 "data_offset": 2048, 00:10:59.899 "data_size": 63488 00:10:59.899 }, 00:10:59.899 { 00:10:59.899 "name": "pt2", 00:10:59.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.899 "is_configured": true, 00:10:59.899 "data_offset": 2048, 00:10:59.899 "data_size": 63488 00:10:59.899 }, 00:10:59.899 { 00:10:59.899 "name": "pt3", 00:10:59.899 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.899 "is_configured": true, 00:10:59.899 "data_offset": 2048, 00:10:59.899 "data_size": 63488 
00:10:59.899 }, 00:10:59.899 { 00:10:59.899 "name": "pt4", 00:10:59.899 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:59.899 "is_configured": true, 00:10:59.899 "data_offset": 2048, 00:10:59.899 "data_size": 63488 00:10:59.899 } 00:10:59.899 ] 00:10:59.899 } 00:10:59.899 } 00:10:59.899 }' 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:59.899 pt2 00:10:59.899 pt3 00:10:59.899 pt4' 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.899 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.159 
17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.159 [2024-11-26 17:55:41.839713] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7cb4101c-ed6d-4859-9e84-640df28c964d '!=' 7cb4101c-ed6d-4859-9e84-640df28c964d ']' 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70982 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70982 ']' 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70982 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70982 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70982' 00:11:00.159 killing process with pid 70982 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70982 00:11:00.159 17:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70982 00:11:00.159 [2024-11-26 17:55:41.915704] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.159 [2024-11-26 17:55:41.915815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.159 [2024-11-26 17:55:41.915954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.159 [2024-11-26 17:55:41.916010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:00.729 [2024-11-26 17:55:42.365404] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:02.108 17:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:02.108 00:11:02.108 real 0m6.068s 00:11:02.108 user 0m8.639s 00:11:02.108 sys 0m0.982s 00:11:02.108 ************************************ 00:11:02.108 END TEST raid_superblock_test 00:11:02.108 ************************************ 00:11:02.108 17:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.108 17:55:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.108 17:55:43 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:02.108 17:55:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:02.108 17:55:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.108 17:55:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:02.108 ************************************ 00:11:02.108 START TEST raid_read_error_test 00:11:02.108 ************************************ 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:02.108 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:02.109 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:02.109 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:02.109 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:02.109 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:02.109 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:02.109 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:02.109 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:02.109 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:02.109 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.W4qLcnHf2a 00:11:02.109 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71251 00:11:02.109 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:02.109 17:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71251 00:11:02.109 17:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71251 ']' 00:11:02.109 17:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.109 17:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.109 17:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.109 17:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.109 17:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.109 [2024-11-26 17:55:43.931210] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:11:02.109 [2024-11-26 17:55:43.931430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71251 ] 00:11:02.368 [2024-11-26 17:55:44.112765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.628 [2024-11-26 17:55:44.252810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.887 [2024-11-26 17:55:44.498957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.887 [2024-11-26 17:55:44.499055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.146 BaseBdev1_malloc 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.146 true 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.146 [2024-11-26 17:55:44.918691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:03.146 [2024-11-26 17:55:44.918814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.146 [2024-11-26 17:55:44.918861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:03.146 [2024-11-26 17:55:44.918903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.146 [2024-11-26 17:55:44.921459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.146 [2024-11-26 17:55:44.921556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:03.146 BaseBdev1 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.146 BaseBdev2_malloc 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.146 true 00:11:03.146 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.147 17:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:03.147 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.147 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.147 [2024-11-26 17:55:44.992783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:03.147 [2024-11-26 17:55:44.992904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.147 [2024-11-26 17:55:44.992962] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:03.147 [2024-11-26 17:55:44.993012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.147 [2024-11-26 17:55:44.995520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.147 [2024-11-26 17:55:44.995611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:03.147 BaseBdev2 00:11:03.147 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.147 17:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.147 17:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:03.147 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.147 17:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.408 BaseBdev3_malloc 00:11:03.408 17:55:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.409 true 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.409 [2024-11-26 17:55:45.081104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:03.409 [2024-11-26 17:55:45.081225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.409 [2024-11-26 17:55:45.081280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:03.409 [2024-11-26 17:55:45.081317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.409 [2024-11-26 17:55:45.083828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.409 [2024-11-26 17:55:45.083922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:03.409 BaseBdev3 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.409 BaseBdev4_malloc 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.409 true 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.409 [2024-11-26 17:55:45.155817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:03.409 [2024-11-26 17:55:45.155941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.409 [2024-11-26 17:55:45.155970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:03.409 [2024-11-26 17:55:45.155984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.409 [2024-11-26 17:55:45.158535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.409 [2024-11-26 17:55:45.158634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:03.409 BaseBdev4 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.409 [2024-11-26 17:55:45.167872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.409 [2024-11-26 17:55:45.170115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.409 [2024-11-26 17:55:45.170255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:03.409 [2024-11-26 17:55:45.170370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:03.409 [2024-11-26 17:55:45.170674] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:03.409 [2024-11-26 17:55:45.170739] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:03.409 [2024-11-26 17:55:45.171085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:03.409 [2024-11-26 17:55:45.171330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:03.409 [2024-11-26 17:55:45.171385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:03.409 [2024-11-26 17:55:45.171627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:03.409 17:55:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.409 "name": "raid_bdev1", 00:11:03.409 "uuid": "c0131f26-6931-439d-a438-2b5a67dc6d8d", 00:11:03.409 "strip_size_kb": 64, 00:11:03.409 "state": "online", 00:11:03.409 "raid_level": "raid0", 00:11:03.409 "superblock": true, 00:11:03.409 "num_base_bdevs": 4, 00:11:03.409 "num_base_bdevs_discovered": 4, 00:11:03.409 "num_base_bdevs_operational": 4, 00:11:03.409 "base_bdevs_list": [ 00:11:03.409 
{ 00:11:03.409 "name": "BaseBdev1", 00:11:03.409 "uuid": "c60cc9cc-2b7f-5efd-ad43-3760a01c8eb1", 00:11:03.409 "is_configured": true, 00:11:03.409 "data_offset": 2048, 00:11:03.409 "data_size": 63488 00:11:03.409 }, 00:11:03.409 { 00:11:03.409 "name": "BaseBdev2", 00:11:03.409 "uuid": "5117a131-cd31-561e-afb4-638280994040", 00:11:03.409 "is_configured": true, 00:11:03.409 "data_offset": 2048, 00:11:03.409 "data_size": 63488 00:11:03.409 }, 00:11:03.409 { 00:11:03.409 "name": "BaseBdev3", 00:11:03.409 "uuid": "c116384f-8fda-5d92-b6a7-1516645e8eb1", 00:11:03.409 "is_configured": true, 00:11:03.409 "data_offset": 2048, 00:11:03.409 "data_size": 63488 00:11:03.409 }, 00:11:03.409 { 00:11:03.409 "name": "BaseBdev4", 00:11:03.409 "uuid": "0fda9307-e3ef-5847-aa33-3ff3120ccc24", 00:11:03.409 "is_configured": true, 00:11:03.409 "data_offset": 2048, 00:11:03.409 "data_size": 63488 00:11:03.409 } 00:11:03.409 ] 00:11:03.409 }' 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.409 17:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.977 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:03.977 17:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:03.977 [2024-11-26 17:55:45.756656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:04.916 17:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:04.916 17:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.916 17:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.916 17:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.916 17:55:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:04.916 17:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:04.916 17:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:04.916 17:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:04.916 17:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.916 17:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.916 17:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.916 17:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.916 17:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.916 17:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.917 17:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.917 17:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.917 17:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.917 17:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.917 17:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.917 17:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.917 17:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.917 17:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.917 17:55:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.917 "name": "raid_bdev1", 00:11:04.917 "uuid": "c0131f26-6931-439d-a438-2b5a67dc6d8d", 00:11:04.917 "strip_size_kb": 64, 00:11:04.917 "state": "online", 00:11:04.917 "raid_level": "raid0", 00:11:04.917 "superblock": true, 00:11:04.917 "num_base_bdevs": 4, 00:11:04.917 "num_base_bdevs_discovered": 4, 00:11:04.917 "num_base_bdevs_operational": 4, 00:11:04.917 "base_bdevs_list": [ 00:11:04.917 { 00:11:04.917 "name": "BaseBdev1", 00:11:04.917 "uuid": "c60cc9cc-2b7f-5efd-ad43-3760a01c8eb1", 00:11:04.917 "is_configured": true, 00:11:04.917 "data_offset": 2048, 00:11:04.917 "data_size": 63488 00:11:04.917 }, 00:11:04.917 { 00:11:04.917 "name": "BaseBdev2", 00:11:04.917 "uuid": "5117a131-cd31-561e-afb4-638280994040", 00:11:04.917 "is_configured": true, 00:11:04.917 "data_offset": 2048, 00:11:04.917 "data_size": 63488 00:11:04.917 }, 00:11:04.917 { 00:11:04.917 "name": "BaseBdev3", 00:11:04.917 "uuid": "c116384f-8fda-5d92-b6a7-1516645e8eb1", 00:11:04.917 "is_configured": true, 00:11:04.917 "data_offset": 2048, 00:11:04.917 "data_size": 63488 00:11:04.917 }, 00:11:04.917 { 00:11:04.917 "name": "BaseBdev4", 00:11:04.917 "uuid": "0fda9307-e3ef-5847-aa33-3ff3120ccc24", 00:11:04.917 "is_configured": true, 00:11:04.917 "data_offset": 2048, 00:11:04.917 "data_size": 63488 00:11:04.917 } 00:11:04.917 ] 00:11:04.917 }' 00:11:04.917 17:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.917 17:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.485 17:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:05.485 17:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.485 17:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.485 [2024-11-26 17:55:47.113816] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:05.485 [2024-11-26 17:55:47.113917] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.485 [2024-11-26 17:55:47.117288] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.485 [2024-11-26 17:55:47.117406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.485 [2024-11-26 17:55:47.117492] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.486 [2024-11-26 17:55:47.117547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:05.486 { 00:11:05.486 "results": [ 00:11:05.486 { 00:11:05.486 "job": "raid_bdev1", 00:11:05.486 "core_mask": "0x1", 00:11:05.486 "workload": "randrw", 00:11:05.486 "percentage": 50, 00:11:05.486 "status": "finished", 00:11:05.486 "queue_depth": 1, 00:11:05.486 "io_size": 131072, 00:11:05.486 "runtime": 1.357791, 00:11:05.486 "iops": 12635.228838606236, 00:11:05.486 "mibps": 1579.4036048257794, 00:11:05.486 "io_failed": 1, 00:11:05.486 "io_timeout": 0, 00:11:05.486 "avg_latency_us": 109.22485659665564, 00:11:05.486 "min_latency_us": 31.972052401746726, 00:11:05.486 "max_latency_us": 1745.7187772925763 00:11:05.486 } 00:11:05.486 ], 00:11:05.486 "core_count": 1 00:11:05.486 } 00:11:05.486 17:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.486 17:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71251 00:11:05.486 17:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71251 ']' 00:11:05.486 17:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71251 00:11:05.486 17:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:05.486 17:55:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.486 17:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71251 00:11:05.486 killing process with pid 71251 00:11:05.486 17:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.486 17:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.486 17:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71251' 00:11:05.486 17:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71251 00:11:05.486 [2024-11-26 17:55:47.163926] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:05.486 17:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71251 00:11:05.754 [2024-11-26 17:55:47.564855] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.659 17:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.W4qLcnHf2a 00:11:07.659 17:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:07.659 17:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:07.659 17:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:07.659 17:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:07.659 17:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:07.659 17:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:07.659 ************************************ 00:11:07.659 END TEST raid_read_error_test 00:11:07.659 ************************************ 00:11:07.659 17:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:07.659 
00:11:07.659 real 0m5.201s 00:11:07.659 user 0m6.132s 00:11:07.659 sys 0m0.627s 00:11:07.659 17:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.659 17:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.659 17:55:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:07.659 17:55:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:07.659 17:55:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.659 17:55:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.659 ************************************ 00:11:07.659 START TEST raid_write_error_test 00:11:07.659 ************************************ 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.659 17:55:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:07.659 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:07.660 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:07.660 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:07.660 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:07.660 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2mZt428TCt 
00:11:07.660 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71402 00:11:07.660 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:07.660 17:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71402 00:11:07.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.660 17:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71402 ']' 00:11:07.660 17:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.660 17:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.660 17:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.660 17:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.660 17:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.660 [2024-11-26 17:55:49.199590] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:11:07.660 [2024-11-26 17:55:49.199825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71402 ] 00:11:07.660 [2024-11-26 17:55:49.380427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.660 [2024-11-26 17:55:49.518803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.918 [2024-11-26 17:55:49.768916] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.918 [2024-11-26 17:55:49.769034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.487 BaseBdev1_malloc 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.487 true 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.487 [2024-11-26 17:55:50.194058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:08.487 [2024-11-26 17:55:50.194201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.487 [2024-11-26 17:55:50.194277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:08.487 [2024-11-26 17:55:50.194324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.487 [2024-11-26 17:55:50.197187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.487 [2024-11-26 17:55:50.197273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:08.487 BaseBdev1 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.487 BaseBdev2_malloc 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:08.487 17:55:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.487 true 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.487 [2024-11-26 17:55:50.268501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:08.487 [2024-11-26 17:55:50.268663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.487 [2024-11-26 17:55:50.268717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:08.487 [2024-11-26 17:55:50.268761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.487 [2024-11-26 17:55:50.271368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.487 [2024-11-26 17:55:50.271470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:08.487 BaseBdev2 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:08.487 BaseBdev3_malloc 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.487 true 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:08.487 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.747 [2024-11-26 17:55:50.353613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:08.747 [2024-11-26 17:55:50.353696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.747 [2024-11-26 17:55:50.353724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:08.747 [2024-11-26 17:55:50.353737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.747 [2024-11-26 17:55:50.356388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.747 [2024-11-26 17:55:50.356441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:08.747 BaseBdev3 00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.747 BaseBdev4_malloc 00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.747 true 00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.747 [2024-11-26 17:55:50.431894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:08.747 [2024-11-26 17:55:50.431960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.747 [2024-11-26 17:55:50.431982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:08.747 [2024-11-26 17:55:50.431995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.747 [2024-11-26 17:55:50.434506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.747 [2024-11-26 17:55:50.434553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:08.747 BaseBdev4 
00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.747 [2024-11-26 17:55:50.443944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.747 [2024-11-26 17:55:50.446083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.747 [2024-11-26 17:55:50.446215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.747 [2024-11-26 17:55:50.446328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:08.747 [2024-11-26 17:55:50.446615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:08.747 [2024-11-26 17:55:50.446675] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:08.747 [2024-11-26 17:55:50.446988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:08.747 [2024-11-26 17:55:50.447251] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:08.747 [2024-11-26 17:55:50.447300] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:08.747 [2024-11-26 17:55:50.447531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.747 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.748 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:08.748 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.748 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.748 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.748 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.748 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.748 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.748 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.748 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.748 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.748 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.748 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.748 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.748 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.748 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.748 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.748 "name": "raid_bdev1", 00:11:08.748 "uuid": "49e6b6dc-8b77-438a-aabe-61d67a80b1a5", 00:11:08.748 "strip_size_kb": 64, 00:11:08.748 "state": "online", 00:11:08.748 "raid_level": "raid0", 00:11:08.748 "superblock": true, 00:11:08.748 "num_base_bdevs": 4, 00:11:08.748 "num_base_bdevs_discovered": 4, 00:11:08.748 
"num_base_bdevs_operational": 4, 00:11:08.748 "base_bdevs_list": [ 00:11:08.748 { 00:11:08.748 "name": "BaseBdev1", 00:11:08.748 "uuid": "dc58d906-19cd-59ed-b6a6-53df661322b0", 00:11:08.748 "is_configured": true, 00:11:08.748 "data_offset": 2048, 00:11:08.748 "data_size": 63488 00:11:08.748 }, 00:11:08.748 { 00:11:08.748 "name": "BaseBdev2", 00:11:08.748 "uuid": "6d58c73b-96a0-5375-82a0-437d5055c5a9", 00:11:08.748 "is_configured": true, 00:11:08.748 "data_offset": 2048, 00:11:08.748 "data_size": 63488 00:11:08.748 }, 00:11:08.748 { 00:11:08.748 "name": "BaseBdev3", 00:11:08.748 "uuid": "d46615a3-1216-554d-a330-d2588f974e42", 00:11:08.748 "is_configured": true, 00:11:08.748 "data_offset": 2048, 00:11:08.748 "data_size": 63488 00:11:08.748 }, 00:11:08.748 { 00:11:08.748 "name": "BaseBdev4", 00:11:08.748 "uuid": "265882d5-b3e3-55b3-90d7-b4fc933c304f", 00:11:08.748 "is_configured": true, 00:11:08.748 "data_offset": 2048, 00:11:08.748 "data_size": 63488 00:11:08.748 } 00:11:08.748 ] 00:11:08.748 }' 00:11:08.748 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.748 17:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.317 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:09.317 17:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:09.317 [2024-11-26 17:55:51.060568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.254 17:55:51 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.254 17:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.254 "name": "raid_bdev1", 00:11:10.254 "uuid": "49e6b6dc-8b77-438a-aabe-61d67a80b1a5", 00:11:10.254 "strip_size_kb": 64, 00:11:10.254 "state": "online", 00:11:10.254 "raid_level": "raid0", 00:11:10.254 "superblock": true, 00:11:10.254 "num_base_bdevs": 4, 00:11:10.254 "num_base_bdevs_discovered": 4, 00:11:10.254 "num_base_bdevs_operational": 4, 00:11:10.254 "base_bdevs_list": [ 00:11:10.254 { 00:11:10.254 "name": "BaseBdev1", 00:11:10.255 "uuid": "dc58d906-19cd-59ed-b6a6-53df661322b0", 00:11:10.255 "is_configured": true, 00:11:10.255 "data_offset": 2048, 00:11:10.255 "data_size": 63488 00:11:10.255 }, 00:11:10.255 { 00:11:10.255 "name": "BaseBdev2", 00:11:10.255 "uuid": "6d58c73b-96a0-5375-82a0-437d5055c5a9", 00:11:10.255 "is_configured": true, 00:11:10.255 "data_offset": 2048, 00:11:10.255 "data_size": 63488 00:11:10.255 }, 00:11:10.255 { 00:11:10.255 "name": "BaseBdev3", 00:11:10.255 "uuid": "d46615a3-1216-554d-a330-d2588f974e42", 00:11:10.255 "is_configured": true, 00:11:10.255 "data_offset": 2048, 00:11:10.255 "data_size": 63488 00:11:10.255 }, 00:11:10.255 { 00:11:10.255 "name": "BaseBdev4", 00:11:10.255 "uuid": "265882d5-b3e3-55b3-90d7-b4fc933c304f", 00:11:10.255 "is_configured": true, 00:11:10.255 "data_offset": 2048, 00:11:10.255 "data_size": 63488 00:11:10.255 } 00:11:10.255 ] 00:11:10.255 }' 00:11:10.255 17:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.255 17:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.823 17:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:10.823 17:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.823 17:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:10.823 [2024-11-26 17:55:52.405925] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.823 [2024-11-26 17:55:52.406068] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.823 [2024-11-26 17:55:52.409951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.823 [2024-11-26 17:55:52.410184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.823 [2024-11-26 17:55:52.410330] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.823 [2024-11-26 17:55:52.410433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:10.823 { 00:11:10.823 "results": [ 00:11:10.823 { 00:11:10.823 "job": "raid_bdev1", 00:11:10.823 "core_mask": "0x1", 00:11:10.823 "workload": "randrw", 00:11:10.823 "percentage": 50, 00:11:10.823 "status": "finished", 00:11:10.823 "queue_depth": 1, 00:11:10.823 "io_size": 131072, 00:11:10.823 "runtime": 1.34601, 00:11:10.823 "iops": 12739.875632424722, 00:11:10.823 "mibps": 1592.4844540530903, 00:11:10.823 "io_failed": 1, 00:11:10.823 "io_timeout": 0, 00:11:10.823 "avg_latency_us": 108.33826082771577, 00:11:10.823 "min_latency_us": 34.20786026200874, 00:11:10.823 "max_latency_us": 1752.8733624454148 00:11:10.823 } 00:11:10.823 ], 00:11:10.823 "core_count": 1 00:11:10.823 } 00:11:10.823 17:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.823 17:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71402 00:11:10.823 17:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71402 ']' 00:11:10.823 17:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71402 00:11:10.823 17:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:11:10.823 17:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.823 17:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71402 00:11:10.823 killing process with pid 71402 00:11:10.823 17:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.823 17:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.823 17:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71402' 00:11:10.823 17:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71402 00:11:10.823 [2024-11-26 17:55:52.457197] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.823 17:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71402 00:11:11.082 [2024-11-26 17:55:52.861140] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.460 17:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:12.460 17:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2mZt428TCt 00:11:12.460 17:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:12.722 17:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:12.722 17:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:12.722 17:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:12.722 17:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:12.722 ************************************ 00:11:12.722 END TEST raid_write_error_test 00:11:12.722 ************************************ 00:11:12.722 17:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.74 != \0\.\0\0 ]] 00:11:12.722 00:11:12.722 real 0m5.248s 00:11:12.722 user 0m6.215s 00:11:12.722 sys 0m0.620s 00:11:12.722 17:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.722 17:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.722 17:55:54 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:12.722 17:55:54 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:12.722 17:55:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:12.722 17:55:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.722 17:55:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.722 ************************************ 00:11:12.722 START TEST raid_state_function_test 00:11:12.722 ************************************ 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:12.722 Process raid pid: 71552 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71552 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71552' 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71552 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71552 ']' 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.722 17:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.722 [2024-11-26 17:55:54.506134] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:11:12.722 [2024-11-26 17:55:54.506356] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.986 [2024-11-26 17:55:54.687623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.986 [2024-11-26 17:55:54.827155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.244 [2024-11-26 17:55:55.075914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.244 [2024-11-26 17:55:55.075958] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.810 [2024-11-26 17:55:55.441687] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.810 [2024-11-26 17:55:55.441799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.810 [2024-11-26 17:55:55.441845] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.810 [2024-11-26 17:55:55.441876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.810 [2024-11-26 17:55:55.441937] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:13.810 [2024-11-26 17:55:55.441965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.810 [2024-11-26 17:55:55.442014] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:13.810 [2024-11-26 17:55:55.442061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.810 "name": "Existed_Raid", 00:11:13.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.810 "strip_size_kb": 64, 00:11:13.810 "state": "configuring", 00:11:13.810 "raid_level": "concat", 00:11:13.810 "superblock": false, 00:11:13.810 "num_base_bdevs": 4, 00:11:13.810 "num_base_bdevs_discovered": 0, 00:11:13.810 "num_base_bdevs_operational": 4, 00:11:13.810 "base_bdevs_list": [ 00:11:13.810 { 00:11:13.810 "name": "BaseBdev1", 00:11:13.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.810 "is_configured": false, 00:11:13.810 "data_offset": 0, 00:11:13.810 "data_size": 0 00:11:13.810 }, 00:11:13.810 { 00:11:13.810 "name": "BaseBdev2", 00:11:13.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.810 "is_configured": false, 00:11:13.810 "data_offset": 0, 00:11:13.810 "data_size": 0 00:11:13.810 }, 00:11:13.810 { 00:11:13.810 "name": "BaseBdev3", 00:11:13.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.810 "is_configured": false, 00:11:13.810 "data_offset": 0, 00:11:13.810 "data_size": 0 00:11:13.810 }, 00:11:13.810 { 00:11:13.810 "name": "BaseBdev4", 00:11:13.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.810 "is_configured": false, 00:11:13.810 "data_offset": 0, 00:11:13.810 "data_size": 0 00:11:13.810 } 00:11:13.810 ] 00:11:13.810 }' 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.810 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.069 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:14.070 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.070 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.070 [2024-11-26 17:55:55.877232] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.070 [2024-11-26 17:55:55.877333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:14.070 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.070 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:14.070 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.070 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.070 [2024-11-26 17:55:55.889207] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.070 [2024-11-26 17:55:55.889258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.070 [2024-11-26 17:55:55.889275] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.070 [2024-11-26 17:55:55.889292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.070 [2024-11-26 17:55:55.889305] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:14.070 [2024-11-26 17:55:55.889322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.070 [2024-11-26 17:55:55.889334] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:14.070 [2024-11-26 17:55:55.889349] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:14.070 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.070 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:14.070 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.070 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.377 [2024-11-26 17:55:55.944638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.377 BaseBdev1 00:11:14.377 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.377 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:14.377 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:14.377 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.377 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:14.377 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.377 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.377 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.377 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.377 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.377 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.377 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:14.377 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.377 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.377 [ 00:11:14.377 { 00:11:14.377 "name": "BaseBdev1", 00:11:14.377 "aliases": [ 00:11:14.377 "bf0296b4-6fe9-4317-80ff-dd90640940a7" 00:11:14.377 ], 00:11:14.377 "product_name": "Malloc disk", 00:11:14.377 "block_size": 512, 00:11:14.378 "num_blocks": 65536, 00:11:14.378 "uuid": "bf0296b4-6fe9-4317-80ff-dd90640940a7", 00:11:14.378 "assigned_rate_limits": { 00:11:14.378 "rw_ios_per_sec": 0, 00:11:14.378 "rw_mbytes_per_sec": 0, 00:11:14.378 "r_mbytes_per_sec": 0, 00:11:14.378 "w_mbytes_per_sec": 0 00:11:14.378 }, 00:11:14.378 "claimed": true, 00:11:14.378 "claim_type": "exclusive_write", 00:11:14.378 "zoned": false, 00:11:14.378 "supported_io_types": { 00:11:14.378 "read": true, 00:11:14.378 "write": true, 00:11:14.378 "unmap": true, 00:11:14.378 "flush": true, 00:11:14.378 "reset": true, 00:11:14.378 "nvme_admin": false, 00:11:14.378 "nvme_io": false, 00:11:14.378 "nvme_io_md": false, 00:11:14.378 "write_zeroes": true, 00:11:14.378 "zcopy": true, 00:11:14.378 "get_zone_info": false, 00:11:14.378 "zone_management": false, 00:11:14.378 "zone_append": false, 00:11:14.378 "compare": false, 00:11:14.378 "compare_and_write": false, 00:11:14.378 "abort": true, 00:11:14.378 "seek_hole": false, 00:11:14.378 "seek_data": false, 00:11:14.378 "copy": true, 00:11:14.378 "nvme_iov_md": false 00:11:14.378 }, 00:11:14.378 "memory_domains": [ 00:11:14.378 { 00:11:14.378 "dma_device_id": "system", 00:11:14.378 "dma_device_type": 1 00:11:14.378 }, 00:11:14.378 { 00:11:14.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.378 "dma_device_type": 2 00:11:14.378 } 00:11:14.378 ], 00:11:14.378 "driver_specific": {} 00:11:14.378 } 00:11:14.378 ] 00:11:14.378 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:14.378 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:14.378 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:14.378 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.378 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.378 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.378 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.378 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.378 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.378 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.378 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.378 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.378 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.378 17:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.378 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.378 17:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.378 17:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.378 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.378 "name": "Existed_Raid", 
00:11:14.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.378 "strip_size_kb": 64, 00:11:14.378 "state": "configuring", 00:11:14.378 "raid_level": "concat", 00:11:14.378 "superblock": false, 00:11:14.378 "num_base_bdevs": 4, 00:11:14.378 "num_base_bdevs_discovered": 1, 00:11:14.378 "num_base_bdevs_operational": 4, 00:11:14.378 "base_bdevs_list": [ 00:11:14.378 { 00:11:14.378 "name": "BaseBdev1", 00:11:14.378 "uuid": "bf0296b4-6fe9-4317-80ff-dd90640940a7", 00:11:14.378 "is_configured": true, 00:11:14.378 "data_offset": 0, 00:11:14.378 "data_size": 65536 00:11:14.378 }, 00:11:14.378 { 00:11:14.378 "name": "BaseBdev2", 00:11:14.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.378 "is_configured": false, 00:11:14.378 "data_offset": 0, 00:11:14.378 "data_size": 0 00:11:14.378 }, 00:11:14.378 { 00:11:14.378 "name": "BaseBdev3", 00:11:14.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.378 "is_configured": false, 00:11:14.378 "data_offset": 0, 00:11:14.378 "data_size": 0 00:11:14.378 }, 00:11:14.378 { 00:11:14.378 "name": "BaseBdev4", 00:11:14.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.378 "is_configured": false, 00:11:14.378 "data_offset": 0, 00:11:14.378 "data_size": 0 00:11:14.378 } 00:11:14.378 ] 00:11:14.378 }' 00:11:14.378 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.378 17:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.945 [2024-11-26 17:55:56.512063] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.945 [2024-11-26 17:55:56.512174] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.945 [2024-11-26 17:55:56.524134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.945 [2024-11-26 17:55:56.526416] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.945 [2024-11-26 17:55:56.526509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.945 [2024-11-26 17:55:56.526545] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:14.945 [2024-11-26 17:55:56.526577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.945 [2024-11-26 17:55:56.526609] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:14.945 [2024-11-26 17:55:56.526636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.945 "name": "Existed_Raid", 00:11:14.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.945 "strip_size_kb": 64, 00:11:14.945 "state": "configuring", 00:11:14.945 "raid_level": "concat", 00:11:14.945 "superblock": false, 00:11:14.945 "num_base_bdevs": 4, 00:11:14.945 
"num_base_bdevs_discovered": 1, 00:11:14.945 "num_base_bdevs_operational": 4, 00:11:14.945 "base_bdevs_list": [ 00:11:14.945 { 00:11:14.945 "name": "BaseBdev1", 00:11:14.945 "uuid": "bf0296b4-6fe9-4317-80ff-dd90640940a7", 00:11:14.945 "is_configured": true, 00:11:14.945 "data_offset": 0, 00:11:14.945 "data_size": 65536 00:11:14.945 }, 00:11:14.945 { 00:11:14.945 "name": "BaseBdev2", 00:11:14.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.945 "is_configured": false, 00:11:14.945 "data_offset": 0, 00:11:14.945 "data_size": 0 00:11:14.945 }, 00:11:14.945 { 00:11:14.945 "name": "BaseBdev3", 00:11:14.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.945 "is_configured": false, 00:11:14.945 "data_offset": 0, 00:11:14.945 "data_size": 0 00:11:14.945 }, 00:11:14.945 { 00:11:14.945 "name": "BaseBdev4", 00:11:14.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.945 "is_configured": false, 00:11:14.945 "data_offset": 0, 00:11:14.945 "data_size": 0 00:11:14.945 } 00:11:14.945 ] 00:11:14.945 }' 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.945 17:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.204 17:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:15.204 17:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.204 17:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.204 [2024-11-26 17:55:57.044804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.204 BaseBdev2 00:11:15.204 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.204 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:15.204 17:55:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:15.204 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.204 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.204 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.204 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.204 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.204 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.204 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.204 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.204 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:15.204 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.204 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.463 [ 00:11:15.463 { 00:11:15.463 "name": "BaseBdev2", 00:11:15.463 "aliases": [ 00:11:15.463 "0d2a1eed-86ed-4976-b597-943cf5c3b23d" 00:11:15.463 ], 00:11:15.463 "product_name": "Malloc disk", 00:11:15.463 "block_size": 512, 00:11:15.463 "num_blocks": 65536, 00:11:15.463 "uuid": "0d2a1eed-86ed-4976-b597-943cf5c3b23d", 00:11:15.463 "assigned_rate_limits": { 00:11:15.463 "rw_ios_per_sec": 0, 00:11:15.463 "rw_mbytes_per_sec": 0, 00:11:15.463 "r_mbytes_per_sec": 0, 00:11:15.463 "w_mbytes_per_sec": 0 00:11:15.463 }, 00:11:15.463 "claimed": true, 00:11:15.463 "claim_type": "exclusive_write", 00:11:15.463 "zoned": false, 00:11:15.463 "supported_io_types": { 
00:11:15.463 "read": true, 00:11:15.463 "write": true, 00:11:15.463 "unmap": true, 00:11:15.463 "flush": true, 00:11:15.463 "reset": true, 00:11:15.463 "nvme_admin": false, 00:11:15.463 "nvme_io": false, 00:11:15.463 "nvme_io_md": false, 00:11:15.463 "write_zeroes": true, 00:11:15.463 "zcopy": true, 00:11:15.463 "get_zone_info": false, 00:11:15.463 "zone_management": false, 00:11:15.463 "zone_append": false, 00:11:15.463 "compare": false, 00:11:15.463 "compare_and_write": false, 00:11:15.463 "abort": true, 00:11:15.463 "seek_hole": false, 00:11:15.463 "seek_data": false, 00:11:15.463 "copy": true, 00:11:15.463 "nvme_iov_md": false 00:11:15.463 }, 00:11:15.463 "memory_domains": [ 00:11:15.463 { 00:11:15.463 "dma_device_id": "system", 00:11:15.463 "dma_device_type": 1 00:11:15.463 }, 00:11:15.463 { 00:11:15.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.463 "dma_device_type": 2 00:11:15.463 } 00:11:15.463 ], 00:11:15.463 "driver_specific": {} 00:11:15.463 } 00:11:15.463 ] 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.463 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.463 "name": "Existed_Raid", 00:11:15.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.463 "strip_size_kb": 64, 00:11:15.463 "state": "configuring", 00:11:15.463 "raid_level": "concat", 00:11:15.463 "superblock": false, 00:11:15.463 "num_base_bdevs": 4, 00:11:15.463 "num_base_bdevs_discovered": 2, 00:11:15.463 "num_base_bdevs_operational": 4, 00:11:15.463 "base_bdevs_list": [ 00:11:15.463 { 00:11:15.463 "name": "BaseBdev1", 00:11:15.463 "uuid": "bf0296b4-6fe9-4317-80ff-dd90640940a7", 00:11:15.463 "is_configured": true, 00:11:15.463 "data_offset": 0, 00:11:15.463 "data_size": 65536 00:11:15.463 }, 00:11:15.463 { 00:11:15.463 "name": "BaseBdev2", 00:11:15.463 "uuid": "0d2a1eed-86ed-4976-b597-943cf5c3b23d", 00:11:15.463 
"is_configured": true, 00:11:15.463 "data_offset": 0, 00:11:15.463 "data_size": 65536 00:11:15.463 }, 00:11:15.464 { 00:11:15.464 "name": "BaseBdev3", 00:11:15.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.464 "is_configured": false, 00:11:15.464 "data_offset": 0, 00:11:15.464 "data_size": 0 00:11:15.464 }, 00:11:15.464 { 00:11:15.464 "name": "BaseBdev4", 00:11:15.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.464 "is_configured": false, 00:11:15.464 "data_offset": 0, 00:11:15.464 "data_size": 0 00:11:15.464 } 00:11:15.464 ] 00:11:15.464 }' 00:11:15.464 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.464 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.723 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:15.723 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.723 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.983 [2024-11-26 17:55:57.609214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.983 BaseBdev3 00:11:15.983 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.983 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:15.983 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:15.983 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.983 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.983 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.983 17:55:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.983 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.983 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.983 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.983 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.983 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:15.983 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.983 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.983 [ 00:11:15.983 { 00:11:15.983 "name": "BaseBdev3", 00:11:15.983 "aliases": [ 00:11:15.983 "bf75b3d3-d120-479b-a0f7-0682efd47644" 00:11:15.983 ], 00:11:15.983 "product_name": "Malloc disk", 00:11:15.983 "block_size": 512, 00:11:15.983 "num_blocks": 65536, 00:11:15.983 "uuid": "bf75b3d3-d120-479b-a0f7-0682efd47644", 00:11:15.983 "assigned_rate_limits": { 00:11:15.983 "rw_ios_per_sec": 0, 00:11:15.983 "rw_mbytes_per_sec": 0, 00:11:15.983 "r_mbytes_per_sec": 0, 00:11:15.984 "w_mbytes_per_sec": 0 00:11:15.984 }, 00:11:15.984 "claimed": true, 00:11:15.984 "claim_type": "exclusive_write", 00:11:15.984 "zoned": false, 00:11:15.984 "supported_io_types": { 00:11:15.984 "read": true, 00:11:15.984 "write": true, 00:11:15.984 "unmap": true, 00:11:15.984 "flush": true, 00:11:15.984 "reset": true, 00:11:15.984 "nvme_admin": false, 00:11:15.984 "nvme_io": false, 00:11:15.984 "nvme_io_md": false, 00:11:15.984 "write_zeroes": true, 00:11:15.984 "zcopy": true, 00:11:15.984 "get_zone_info": false, 00:11:15.984 "zone_management": false, 00:11:15.984 "zone_append": false, 00:11:15.984 "compare": false, 00:11:15.984 "compare_and_write": false, 
00:11:15.984 "abort": true, 00:11:15.984 "seek_hole": false, 00:11:15.984 "seek_data": false, 00:11:15.984 "copy": true, 00:11:15.984 "nvme_iov_md": false 00:11:15.984 }, 00:11:15.984 "memory_domains": [ 00:11:15.984 { 00:11:15.984 "dma_device_id": "system", 00:11:15.984 "dma_device_type": 1 00:11:15.984 }, 00:11:15.984 { 00:11:15.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.984 "dma_device_type": 2 00:11:15.984 } 00:11:15.984 ], 00:11:15.984 "driver_specific": {} 00:11:15.984 } 00:11:15.984 ] 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.984 "name": "Existed_Raid", 00:11:15.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.984 "strip_size_kb": 64, 00:11:15.984 "state": "configuring", 00:11:15.984 "raid_level": "concat", 00:11:15.984 "superblock": false, 00:11:15.984 "num_base_bdevs": 4, 00:11:15.984 "num_base_bdevs_discovered": 3, 00:11:15.984 "num_base_bdevs_operational": 4, 00:11:15.984 "base_bdevs_list": [ 00:11:15.984 { 00:11:15.984 "name": "BaseBdev1", 00:11:15.984 "uuid": "bf0296b4-6fe9-4317-80ff-dd90640940a7", 00:11:15.984 "is_configured": true, 00:11:15.984 "data_offset": 0, 00:11:15.984 "data_size": 65536 00:11:15.984 }, 00:11:15.984 { 00:11:15.984 "name": "BaseBdev2", 00:11:15.984 "uuid": "0d2a1eed-86ed-4976-b597-943cf5c3b23d", 00:11:15.984 "is_configured": true, 00:11:15.984 "data_offset": 0, 00:11:15.984 "data_size": 65536 00:11:15.984 }, 00:11:15.984 { 00:11:15.984 "name": "BaseBdev3", 00:11:15.984 "uuid": "bf75b3d3-d120-479b-a0f7-0682efd47644", 00:11:15.984 "is_configured": true, 00:11:15.984 "data_offset": 0, 00:11:15.984 "data_size": 65536 00:11:15.984 }, 00:11:15.984 { 00:11:15.984 "name": "BaseBdev4", 00:11:15.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.984 "is_configured": false, 
00:11:15.984 "data_offset": 0, 00:11:15.984 "data_size": 0 00:11:15.984 } 00:11:15.984 ] 00:11:15.984 }' 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.984 17:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.553 [2024-11-26 17:55:58.194095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:16.553 [2024-11-26 17:55:58.194278] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:16.553 [2024-11-26 17:55:58.194297] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:16.553 [2024-11-26 17:55:58.194695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:16.553 [2024-11-26 17:55:58.194936] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:16.553 [2024-11-26 17:55:58.194953] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:16.553 [2024-11-26 17:55:58.195309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.553 BaseBdev4 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.553 [ 00:11:16.553 { 00:11:16.553 "name": "BaseBdev4", 00:11:16.553 "aliases": [ 00:11:16.553 "9ee6dfe6-e3b3-46d3-a85f-f187089691ca" 00:11:16.553 ], 00:11:16.553 "product_name": "Malloc disk", 00:11:16.553 "block_size": 512, 00:11:16.553 "num_blocks": 65536, 00:11:16.553 "uuid": "9ee6dfe6-e3b3-46d3-a85f-f187089691ca", 00:11:16.553 "assigned_rate_limits": { 00:11:16.553 "rw_ios_per_sec": 0, 00:11:16.553 "rw_mbytes_per_sec": 0, 00:11:16.553 "r_mbytes_per_sec": 0, 00:11:16.553 "w_mbytes_per_sec": 0 00:11:16.553 }, 00:11:16.553 "claimed": true, 00:11:16.553 "claim_type": "exclusive_write", 00:11:16.553 "zoned": false, 00:11:16.553 "supported_io_types": { 00:11:16.553 "read": true, 00:11:16.553 "write": true, 00:11:16.553 "unmap": true, 00:11:16.553 "flush": true, 00:11:16.553 "reset": true, 00:11:16.553 
"nvme_admin": false, 00:11:16.553 "nvme_io": false, 00:11:16.553 "nvme_io_md": false, 00:11:16.553 "write_zeroes": true, 00:11:16.553 "zcopy": true, 00:11:16.553 "get_zone_info": false, 00:11:16.553 "zone_management": false, 00:11:16.553 "zone_append": false, 00:11:16.553 "compare": false, 00:11:16.553 "compare_and_write": false, 00:11:16.553 "abort": true, 00:11:16.553 "seek_hole": false, 00:11:16.553 "seek_data": false, 00:11:16.553 "copy": true, 00:11:16.553 "nvme_iov_md": false 00:11:16.553 }, 00:11:16.553 "memory_domains": [ 00:11:16.553 { 00:11:16.553 "dma_device_id": "system", 00:11:16.553 "dma_device_type": 1 00:11:16.553 }, 00:11:16.553 { 00:11:16.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.553 "dma_device_type": 2 00:11:16.553 } 00:11:16.553 ], 00:11:16.553 "driver_specific": {} 00:11:16.553 } 00:11:16.553 ] 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.553 
17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.553 "name": "Existed_Raid", 00:11:16.553 "uuid": "1d2fbfe4-d3e4-4351-9c05-0f930841ea0e", 00:11:16.553 "strip_size_kb": 64, 00:11:16.553 "state": "online", 00:11:16.553 "raid_level": "concat", 00:11:16.553 "superblock": false, 00:11:16.553 "num_base_bdevs": 4, 00:11:16.553 "num_base_bdevs_discovered": 4, 00:11:16.553 "num_base_bdevs_operational": 4, 00:11:16.553 "base_bdevs_list": [ 00:11:16.553 { 00:11:16.553 "name": "BaseBdev1", 00:11:16.553 "uuid": "bf0296b4-6fe9-4317-80ff-dd90640940a7", 00:11:16.553 "is_configured": true, 00:11:16.553 "data_offset": 0, 00:11:16.553 "data_size": 65536 00:11:16.553 }, 00:11:16.553 { 00:11:16.553 "name": "BaseBdev2", 00:11:16.553 "uuid": "0d2a1eed-86ed-4976-b597-943cf5c3b23d", 00:11:16.553 "is_configured": true, 00:11:16.553 "data_offset": 0, 00:11:16.553 "data_size": 65536 00:11:16.553 }, 00:11:16.553 { 00:11:16.553 "name": "BaseBdev3", 
00:11:16.553 "uuid": "bf75b3d3-d120-479b-a0f7-0682efd47644", 00:11:16.553 "is_configured": true, 00:11:16.553 "data_offset": 0, 00:11:16.553 "data_size": 65536 00:11:16.553 }, 00:11:16.553 { 00:11:16.553 "name": "BaseBdev4", 00:11:16.553 "uuid": "9ee6dfe6-e3b3-46d3-a85f-f187089691ca", 00:11:16.553 "is_configured": true, 00:11:16.553 "data_offset": 0, 00:11:16.553 "data_size": 65536 00:11:16.553 } 00:11:16.553 ] 00:11:16.553 }' 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.553 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.122 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:17.122 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:17.122 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:17.122 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:17.122 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:17.122 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:17.122 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:17.122 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:17.122 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.123 [2024-11-26 17:55:58.705706] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.123 
17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:17.123 "name": "Existed_Raid", 00:11:17.123 "aliases": [ 00:11:17.123 "1d2fbfe4-d3e4-4351-9c05-0f930841ea0e" 00:11:17.123 ], 00:11:17.123 "product_name": "Raid Volume", 00:11:17.123 "block_size": 512, 00:11:17.123 "num_blocks": 262144, 00:11:17.123 "uuid": "1d2fbfe4-d3e4-4351-9c05-0f930841ea0e", 00:11:17.123 "assigned_rate_limits": { 00:11:17.123 "rw_ios_per_sec": 0, 00:11:17.123 "rw_mbytes_per_sec": 0, 00:11:17.123 "r_mbytes_per_sec": 0, 00:11:17.123 "w_mbytes_per_sec": 0 00:11:17.123 }, 00:11:17.123 "claimed": false, 00:11:17.123 "zoned": false, 00:11:17.123 "supported_io_types": { 00:11:17.123 "read": true, 00:11:17.123 "write": true, 00:11:17.123 "unmap": true, 00:11:17.123 "flush": true, 00:11:17.123 "reset": true, 00:11:17.123 "nvme_admin": false, 00:11:17.123 "nvme_io": false, 00:11:17.123 "nvme_io_md": false, 00:11:17.123 "write_zeroes": true, 00:11:17.123 "zcopy": false, 00:11:17.123 "get_zone_info": false, 00:11:17.123 "zone_management": false, 00:11:17.123 "zone_append": false, 00:11:17.123 "compare": false, 00:11:17.123 "compare_and_write": false, 00:11:17.123 "abort": false, 00:11:17.123 "seek_hole": false, 00:11:17.123 "seek_data": false, 00:11:17.123 "copy": false, 00:11:17.123 "nvme_iov_md": false 00:11:17.123 }, 00:11:17.123 "memory_domains": [ 00:11:17.123 { 00:11:17.123 "dma_device_id": "system", 00:11:17.123 "dma_device_type": 1 00:11:17.123 }, 00:11:17.123 { 00:11:17.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.123 "dma_device_type": 2 00:11:17.123 }, 00:11:17.123 { 00:11:17.123 "dma_device_id": "system", 00:11:17.123 "dma_device_type": 1 00:11:17.123 }, 00:11:17.123 { 00:11:17.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.123 "dma_device_type": 2 00:11:17.123 }, 00:11:17.123 { 00:11:17.123 "dma_device_id": "system", 00:11:17.123 "dma_device_type": 1 00:11:17.123 }, 00:11:17.123 { 00:11:17.123 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:17.123 "dma_device_type": 2 00:11:17.123 }, 00:11:17.123 { 00:11:17.123 "dma_device_id": "system", 00:11:17.123 "dma_device_type": 1 00:11:17.123 }, 00:11:17.123 { 00:11:17.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.123 "dma_device_type": 2 00:11:17.123 } 00:11:17.123 ], 00:11:17.123 "driver_specific": { 00:11:17.123 "raid": { 00:11:17.123 "uuid": "1d2fbfe4-d3e4-4351-9c05-0f930841ea0e", 00:11:17.123 "strip_size_kb": 64, 00:11:17.123 "state": "online", 00:11:17.123 "raid_level": "concat", 00:11:17.123 "superblock": false, 00:11:17.123 "num_base_bdevs": 4, 00:11:17.123 "num_base_bdevs_discovered": 4, 00:11:17.123 "num_base_bdevs_operational": 4, 00:11:17.123 "base_bdevs_list": [ 00:11:17.123 { 00:11:17.123 "name": "BaseBdev1", 00:11:17.123 "uuid": "bf0296b4-6fe9-4317-80ff-dd90640940a7", 00:11:17.123 "is_configured": true, 00:11:17.123 "data_offset": 0, 00:11:17.123 "data_size": 65536 00:11:17.123 }, 00:11:17.123 { 00:11:17.123 "name": "BaseBdev2", 00:11:17.123 "uuid": "0d2a1eed-86ed-4976-b597-943cf5c3b23d", 00:11:17.123 "is_configured": true, 00:11:17.123 "data_offset": 0, 00:11:17.123 "data_size": 65536 00:11:17.123 }, 00:11:17.123 { 00:11:17.123 "name": "BaseBdev3", 00:11:17.123 "uuid": "bf75b3d3-d120-479b-a0f7-0682efd47644", 00:11:17.123 "is_configured": true, 00:11:17.123 "data_offset": 0, 00:11:17.123 "data_size": 65536 00:11:17.123 }, 00:11:17.123 { 00:11:17.123 "name": "BaseBdev4", 00:11:17.123 "uuid": "9ee6dfe6-e3b3-46d3-a85f-f187089691ca", 00:11:17.123 "is_configured": true, 00:11:17.123 "data_offset": 0, 00:11:17.123 "data_size": 65536 00:11:17.123 } 00:11:17.123 ] 00:11:17.123 } 00:11:17.123 } 00:11:17.123 }' 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:17.123 BaseBdev2 
00:11:17.123 BaseBdev3 00:11:17.123 BaseBdev4' 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.123 17:55:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:17.123 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.124 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.384 17:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.384 17:55:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.384 [2024-11-26 17:55:59.077260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:17.384 [2024-11-26 17:55:59.077348] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.384 [2024-11-26 17:55:59.077441] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.384 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.643 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.643 "name": "Existed_Raid", 00:11:17.643 "uuid": "1d2fbfe4-d3e4-4351-9c05-0f930841ea0e", 00:11:17.643 "strip_size_kb": 64, 00:11:17.643 "state": "offline", 00:11:17.643 "raid_level": "concat", 00:11:17.643 "superblock": false, 00:11:17.643 "num_base_bdevs": 4, 00:11:17.643 "num_base_bdevs_discovered": 3, 00:11:17.643 "num_base_bdevs_operational": 3, 00:11:17.643 "base_bdevs_list": [ 00:11:17.643 { 00:11:17.643 "name": null, 00:11:17.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.643 "is_configured": false, 00:11:17.643 "data_offset": 0, 00:11:17.643 "data_size": 65536 00:11:17.643 }, 00:11:17.643 { 00:11:17.643 "name": "BaseBdev2", 00:11:17.643 "uuid": "0d2a1eed-86ed-4976-b597-943cf5c3b23d", 00:11:17.643 "is_configured": 
true, 00:11:17.643 "data_offset": 0, 00:11:17.643 "data_size": 65536 00:11:17.643 }, 00:11:17.643 { 00:11:17.643 "name": "BaseBdev3", 00:11:17.643 "uuid": "bf75b3d3-d120-479b-a0f7-0682efd47644", 00:11:17.643 "is_configured": true, 00:11:17.643 "data_offset": 0, 00:11:17.643 "data_size": 65536 00:11:17.643 }, 00:11:17.643 { 00:11:17.643 "name": "BaseBdev4", 00:11:17.643 "uuid": "9ee6dfe6-e3b3-46d3-a85f-f187089691ca", 00:11:17.643 "is_configured": true, 00:11:17.643 "data_offset": 0, 00:11:17.643 "data_size": 65536 00:11:17.643 } 00:11:17.643 ] 00:11:17.643 }' 00:11:17.643 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.643 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.902 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:17.902 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.902 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.902 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.902 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.902 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.902 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.902 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:17.902 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.902 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:17.902 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:17.902 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.902 [2024-11-26 17:55:59.743250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:18.160 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.160 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.160 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.160 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.160 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:18.160 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.160 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.160 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.160 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:18.160 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.160 17:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:18.160 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.160 17:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.161 [2024-11-26 17:55:59.913578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.420 17:56:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.420 [2024-11-26 17:56:00.087823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:18.420 [2024-11-26 17:56:00.087934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.420 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.680 BaseBdev2 00:11:18.680 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.680 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:18.680 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:18.680 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.680 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.680 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.680 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:18.680 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.680 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.680 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.680 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.680 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:18.680 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.680 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.680 [ 00:11:18.680 { 00:11:18.680 "name": "BaseBdev2", 00:11:18.680 "aliases": [ 00:11:18.680 "19a678eb-6e97-4067-8f99-79694505f328" 00:11:18.680 ], 00:11:18.680 "product_name": "Malloc disk", 00:11:18.680 "block_size": 512, 00:11:18.680 "num_blocks": 65536, 00:11:18.680 "uuid": "19a678eb-6e97-4067-8f99-79694505f328", 00:11:18.680 "assigned_rate_limits": { 00:11:18.680 "rw_ios_per_sec": 0, 00:11:18.680 "rw_mbytes_per_sec": 0, 00:11:18.680 "r_mbytes_per_sec": 0, 00:11:18.680 "w_mbytes_per_sec": 0 00:11:18.680 }, 00:11:18.680 "claimed": false, 00:11:18.680 "zoned": false, 00:11:18.680 "supported_io_types": { 00:11:18.680 "read": true, 00:11:18.681 "write": true, 00:11:18.681 "unmap": true, 00:11:18.681 "flush": true, 00:11:18.681 "reset": true, 00:11:18.681 "nvme_admin": false, 00:11:18.681 "nvme_io": false, 00:11:18.681 "nvme_io_md": false, 00:11:18.681 "write_zeroes": true, 00:11:18.681 "zcopy": true, 00:11:18.681 "get_zone_info": false, 00:11:18.681 "zone_management": false, 00:11:18.681 "zone_append": false, 00:11:18.681 "compare": false, 00:11:18.681 "compare_and_write": false, 00:11:18.681 "abort": true, 00:11:18.681 "seek_hole": false, 00:11:18.681 
"seek_data": false, 00:11:18.681 "copy": true, 00:11:18.681 "nvme_iov_md": false 00:11:18.681 }, 00:11:18.681 "memory_domains": [ 00:11:18.681 { 00:11:18.681 "dma_device_id": "system", 00:11:18.681 "dma_device_type": 1 00:11:18.681 }, 00:11:18.681 { 00:11:18.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.681 "dma_device_type": 2 00:11:18.681 } 00:11:18.681 ], 00:11:18.681 "driver_specific": {} 00:11:18.681 } 00:11:18.681 ] 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.681 BaseBdev3 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.681 [ 00:11:18.681 { 00:11:18.681 "name": "BaseBdev3", 00:11:18.681 "aliases": [ 00:11:18.681 "4f76d0b2-2e8b-4d98-bb68-c413a63dc94d" 00:11:18.681 ], 00:11:18.681 "product_name": "Malloc disk", 00:11:18.681 "block_size": 512, 00:11:18.681 "num_blocks": 65536, 00:11:18.681 "uuid": "4f76d0b2-2e8b-4d98-bb68-c413a63dc94d", 00:11:18.681 "assigned_rate_limits": { 00:11:18.681 "rw_ios_per_sec": 0, 00:11:18.681 "rw_mbytes_per_sec": 0, 00:11:18.681 "r_mbytes_per_sec": 0, 00:11:18.681 "w_mbytes_per_sec": 0 00:11:18.681 }, 00:11:18.681 "claimed": false, 00:11:18.681 "zoned": false, 00:11:18.681 "supported_io_types": { 00:11:18.681 "read": true, 00:11:18.681 "write": true, 00:11:18.681 "unmap": true, 00:11:18.681 "flush": true, 00:11:18.681 "reset": true, 00:11:18.681 "nvme_admin": false, 00:11:18.681 "nvme_io": false, 00:11:18.681 "nvme_io_md": false, 00:11:18.681 "write_zeroes": true, 00:11:18.681 "zcopy": true, 00:11:18.681 "get_zone_info": false, 00:11:18.681 "zone_management": false, 00:11:18.681 "zone_append": false, 00:11:18.681 "compare": false, 00:11:18.681 "compare_and_write": false, 00:11:18.681 "abort": true, 00:11:18.681 "seek_hole": false, 00:11:18.681 "seek_data": false, 
00:11:18.681 "copy": true, 00:11:18.681 "nvme_iov_md": false 00:11:18.681 }, 00:11:18.681 "memory_domains": [ 00:11:18.681 { 00:11:18.681 "dma_device_id": "system", 00:11:18.681 "dma_device_type": 1 00:11:18.681 }, 00:11:18.681 { 00:11:18.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.681 "dma_device_type": 2 00:11:18.681 } 00:11:18.681 ], 00:11:18.681 "driver_specific": {} 00:11:18.681 } 00:11:18.681 ] 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.681 BaseBdev4 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.681 
17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.681 [ 00:11:18.681 { 00:11:18.681 "name": "BaseBdev4", 00:11:18.681 "aliases": [ 00:11:18.681 "76f2c05f-60f1-4356-8be9-5a1051af6e12" 00:11:18.681 ], 00:11:18.681 "product_name": "Malloc disk", 00:11:18.681 "block_size": 512, 00:11:18.681 "num_blocks": 65536, 00:11:18.681 "uuid": "76f2c05f-60f1-4356-8be9-5a1051af6e12", 00:11:18.681 "assigned_rate_limits": { 00:11:18.681 "rw_ios_per_sec": 0, 00:11:18.681 "rw_mbytes_per_sec": 0, 00:11:18.681 "r_mbytes_per_sec": 0, 00:11:18.681 "w_mbytes_per_sec": 0 00:11:18.681 }, 00:11:18.681 "claimed": false, 00:11:18.681 "zoned": false, 00:11:18.681 "supported_io_types": { 00:11:18.681 "read": true, 00:11:18.681 "write": true, 00:11:18.681 "unmap": true, 00:11:18.681 "flush": true, 00:11:18.681 "reset": true, 00:11:18.681 "nvme_admin": false, 00:11:18.681 "nvme_io": false, 00:11:18.681 "nvme_io_md": false, 00:11:18.681 "write_zeroes": true, 00:11:18.681 "zcopy": true, 00:11:18.681 "get_zone_info": false, 00:11:18.681 "zone_management": false, 00:11:18.681 "zone_append": false, 00:11:18.681 "compare": false, 00:11:18.681 "compare_and_write": false, 00:11:18.681 "abort": true, 00:11:18.681 "seek_hole": false, 00:11:18.681 "seek_data": false, 00:11:18.681 
"copy": true, 00:11:18.681 "nvme_iov_md": false 00:11:18.681 }, 00:11:18.681 "memory_domains": [ 00:11:18.681 { 00:11:18.681 "dma_device_id": "system", 00:11:18.681 "dma_device_type": 1 00:11:18.681 }, 00:11:18.681 { 00:11:18.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.681 "dma_device_type": 2 00:11:18.681 } 00:11:18.681 ], 00:11:18.681 "driver_specific": {} 00:11:18.681 } 00:11:18.681 ] 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.681 [2024-11-26 17:56:00.530184] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.681 [2024-11-26 17:56:00.530303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.681 [2024-11-26 17:56:00.530388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.681 [2024-11-26 17:56:00.532635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.681 [2024-11-26 17:56:00.532787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.681 17:56:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.681 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.682 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.682 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.682 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.682 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.682 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.682 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.682 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.682 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.682 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.941 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.941 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.941 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.941 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.941 "name": "Existed_Raid", 00:11:18.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.941 "strip_size_kb": 64, 00:11:18.941 "state": "configuring", 00:11:18.941 
"raid_level": "concat", 00:11:18.941 "superblock": false, 00:11:18.941 "num_base_bdevs": 4, 00:11:18.941 "num_base_bdevs_discovered": 3, 00:11:18.941 "num_base_bdevs_operational": 4, 00:11:18.941 "base_bdevs_list": [ 00:11:18.941 { 00:11:18.941 "name": "BaseBdev1", 00:11:18.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.941 "is_configured": false, 00:11:18.941 "data_offset": 0, 00:11:18.941 "data_size": 0 00:11:18.941 }, 00:11:18.941 { 00:11:18.941 "name": "BaseBdev2", 00:11:18.941 "uuid": "19a678eb-6e97-4067-8f99-79694505f328", 00:11:18.941 "is_configured": true, 00:11:18.941 "data_offset": 0, 00:11:18.941 "data_size": 65536 00:11:18.941 }, 00:11:18.941 { 00:11:18.941 "name": "BaseBdev3", 00:11:18.941 "uuid": "4f76d0b2-2e8b-4d98-bb68-c413a63dc94d", 00:11:18.941 "is_configured": true, 00:11:18.941 "data_offset": 0, 00:11:18.941 "data_size": 65536 00:11:18.941 }, 00:11:18.941 { 00:11:18.941 "name": "BaseBdev4", 00:11:18.941 "uuid": "76f2c05f-60f1-4356-8be9-5a1051af6e12", 00:11:18.941 "is_configured": true, 00:11:18.941 "data_offset": 0, 00:11:18.941 "data_size": 65536 00:11:18.941 } 00:11:18.941 ] 00:11:18.941 }' 00:11:18.941 17:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.941 17:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.199 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:19.199 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.200 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.200 [2024-11-26 17:56:01.017394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:19.200 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.200 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:19.200 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.200 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.200 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.200 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.200 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.200 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.200 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.200 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.200 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.200 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.200 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.200 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.200 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.200 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.458 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.458 "name": "Existed_Raid", 00:11:19.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.458 "strip_size_kb": 64, 00:11:19.458 "state": "configuring", 00:11:19.458 "raid_level": "concat", 00:11:19.458 "superblock": false, 
00:11:19.458 "num_base_bdevs": 4, 00:11:19.458 "num_base_bdevs_discovered": 2, 00:11:19.458 "num_base_bdevs_operational": 4, 00:11:19.458 "base_bdevs_list": [ 00:11:19.458 { 00:11:19.458 "name": "BaseBdev1", 00:11:19.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.458 "is_configured": false, 00:11:19.458 "data_offset": 0, 00:11:19.458 "data_size": 0 00:11:19.458 }, 00:11:19.458 { 00:11:19.458 "name": null, 00:11:19.458 "uuid": "19a678eb-6e97-4067-8f99-79694505f328", 00:11:19.458 "is_configured": false, 00:11:19.458 "data_offset": 0, 00:11:19.458 "data_size": 65536 00:11:19.458 }, 00:11:19.458 { 00:11:19.458 "name": "BaseBdev3", 00:11:19.458 "uuid": "4f76d0b2-2e8b-4d98-bb68-c413a63dc94d", 00:11:19.458 "is_configured": true, 00:11:19.458 "data_offset": 0, 00:11:19.458 "data_size": 65536 00:11:19.458 }, 00:11:19.458 { 00:11:19.458 "name": "BaseBdev4", 00:11:19.458 "uuid": "76f2c05f-60f1-4356-8be9-5a1051af6e12", 00:11:19.458 "is_configured": true, 00:11:19.458 "data_offset": 0, 00:11:19.458 "data_size": 65536 00:11:19.458 } 00:11:19.458 ] 00:11:19.458 }' 00:11:19.458 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.458 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.716 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.716 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:19.716 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.716 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.716 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.716 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:19.716 17:56:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:19.716 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.716 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.976 [2024-11-26 17:56:01.579621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.976 BaseBdev1 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.976 [ 00:11:19.976 { 00:11:19.976 "name": "BaseBdev1", 00:11:19.976 "aliases": [ 00:11:19.976 "d6b54cf6-180a-4910-a623-847cdf7dfdb2" 00:11:19.976 ], 00:11:19.976 "product_name": "Malloc disk", 00:11:19.976 "block_size": 512, 00:11:19.976 "num_blocks": 65536, 00:11:19.976 "uuid": "d6b54cf6-180a-4910-a623-847cdf7dfdb2", 00:11:19.976 "assigned_rate_limits": { 00:11:19.976 "rw_ios_per_sec": 0, 00:11:19.976 "rw_mbytes_per_sec": 0, 00:11:19.976 "r_mbytes_per_sec": 0, 00:11:19.976 "w_mbytes_per_sec": 0 00:11:19.976 }, 00:11:19.976 "claimed": true, 00:11:19.976 "claim_type": "exclusive_write", 00:11:19.976 "zoned": false, 00:11:19.976 "supported_io_types": { 00:11:19.976 "read": true, 00:11:19.976 "write": true, 00:11:19.976 "unmap": true, 00:11:19.976 "flush": true, 00:11:19.976 "reset": true, 00:11:19.976 "nvme_admin": false, 00:11:19.976 "nvme_io": false, 00:11:19.976 "nvme_io_md": false, 00:11:19.976 "write_zeroes": true, 00:11:19.976 "zcopy": true, 00:11:19.976 "get_zone_info": false, 00:11:19.976 "zone_management": false, 00:11:19.976 "zone_append": false, 00:11:19.976 "compare": false, 00:11:19.976 "compare_and_write": false, 00:11:19.976 "abort": true, 00:11:19.976 "seek_hole": false, 00:11:19.976 "seek_data": false, 00:11:19.976 "copy": true, 00:11:19.976 "nvme_iov_md": false 00:11:19.976 }, 00:11:19.976 "memory_domains": [ 00:11:19.976 { 00:11:19.976 "dma_device_id": "system", 00:11:19.976 "dma_device_type": 1 00:11:19.976 }, 00:11:19.976 { 00:11:19.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.976 "dma_device_type": 2 00:11:19.976 } 00:11:19.976 ], 00:11:19.976 "driver_specific": {} 00:11:19.976 } 00:11:19.976 ] 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.976 "name": "Existed_Raid", 00:11:19.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.976 "strip_size_kb": 64, 00:11:19.976 "state": "configuring", 00:11:19.976 "raid_level": "concat", 00:11:19.976 "superblock": false, 
00:11:19.976 "num_base_bdevs": 4, 00:11:19.976 "num_base_bdevs_discovered": 3, 00:11:19.976 "num_base_bdevs_operational": 4, 00:11:19.976 "base_bdevs_list": [ 00:11:19.976 { 00:11:19.976 "name": "BaseBdev1", 00:11:19.976 "uuid": "d6b54cf6-180a-4910-a623-847cdf7dfdb2", 00:11:19.976 "is_configured": true, 00:11:19.976 "data_offset": 0, 00:11:19.976 "data_size": 65536 00:11:19.976 }, 00:11:19.976 { 00:11:19.976 "name": null, 00:11:19.976 "uuid": "19a678eb-6e97-4067-8f99-79694505f328", 00:11:19.976 "is_configured": false, 00:11:19.976 "data_offset": 0, 00:11:19.976 "data_size": 65536 00:11:19.976 }, 00:11:19.976 { 00:11:19.976 "name": "BaseBdev3", 00:11:19.976 "uuid": "4f76d0b2-2e8b-4d98-bb68-c413a63dc94d", 00:11:19.976 "is_configured": true, 00:11:19.976 "data_offset": 0, 00:11:19.976 "data_size": 65536 00:11:19.976 }, 00:11:19.976 { 00:11:19.976 "name": "BaseBdev4", 00:11:19.976 "uuid": "76f2c05f-60f1-4356-8be9-5a1051af6e12", 00:11:19.976 "is_configured": true, 00:11:19.976 "data_offset": 0, 00:11:19.976 "data_size": 65536 00:11:19.976 } 00:11:19.976 ] 00:11:19.976 }' 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.976 17:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:20.544 17:56:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.544 [2024-11-26 17:56:02.158815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.544 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.544 "name": "Existed_Raid", 00:11:20.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.544 "strip_size_kb": 64, 00:11:20.544 "state": "configuring", 00:11:20.544 "raid_level": "concat", 00:11:20.544 "superblock": false, 00:11:20.544 "num_base_bdevs": 4, 00:11:20.544 "num_base_bdevs_discovered": 2, 00:11:20.544 "num_base_bdevs_operational": 4, 00:11:20.544 "base_bdevs_list": [ 00:11:20.544 { 00:11:20.545 "name": "BaseBdev1", 00:11:20.545 "uuid": "d6b54cf6-180a-4910-a623-847cdf7dfdb2", 00:11:20.545 "is_configured": true, 00:11:20.545 "data_offset": 0, 00:11:20.545 "data_size": 65536 00:11:20.545 }, 00:11:20.545 { 00:11:20.545 "name": null, 00:11:20.545 "uuid": "19a678eb-6e97-4067-8f99-79694505f328", 00:11:20.545 "is_configured": false, 00:11:20.545 "data_offset": 0, 00:11:20.545 "data_size": 65536 00:11:20.545 }, 00:11:20.545 { 00:11:20.545 "name": null, 00:11:20.545 "uuid": "4f76d0b2-2e8b-4d98-bb68-c413a63dc94d", 00:11:20.545 "is_configured": false, 00:11:20.545 "data_offset": 0, 00:11:20.545 "data_size": 65536 00:11:20.545 }, 00:11:20.545 { 00:11:20.545 "name": "BaseBdev4", 00:11:20.545 "uuid": "76f2c05f-60f1-4356-8be9-5a1051af6e12", 00:11:20.545 "is_configured": true, 00:11:20.545 "data_offset": 0, 00:11:20.545 "data_size": 65536 00:11:20.545 } 00:11:20.545 ] 00:11:20.545 }' 00:11:20.545 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.545 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.805 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:20.805 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.805 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.805 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:20.805 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.805 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:20.805 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:20.805 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.805 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.805 [2024-11-26 17:56:02.658005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.805 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.805 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:20.805 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.805 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.805 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.805 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.805 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.805 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:11:20.805 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.064 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.064 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.065 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.065 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.065 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.065 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.065 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.065 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.065 "name": "Existed_Raid", 00:11:21.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.065 "strip_size_kb": 64, 00:11:21.065 "state": "configuring", 00:11:21.065 "raid_level": "concat", 00:11:21.065 "superblock": false, 00:11:21.065 "num_base_bdevs": 4, 00:11:21.065 "num_base_bdevs_discovered": 3, 00:11:21.065 "num_base_bdevs_operational": 4, 00:11:21.065 "base_bdevs_list": [ 00:11:21.065 { 00:11:21.065 "name": "BaseBdev1", 00:11:21.065 "uuid": "d6b54cf6-180a-4910-a623-847cdf7dfdb2", 00:11:21.065 "is_configured": true, 00:11:21.065 "data_offset": 0, 00:11:21.065 "data_size": 65536 00:11:21.065 }, 00:11:21.065 { 00:11:21.065 "name": null, 00:11:21.065 "uuid": "19a678eb-6e97-4067-8f99-79694505f328", 00:11:21.065 "is_configured": false, 00:11:21.065 "data_offset": 0, 00:11:21.065 "data_size": 65536 00:11:21.065 }, 00:11:21.065 { 00:11:21.065 "name": "BaseBdev3", 00:11:21.065 "uuid": "4f76d0b2-2e8b-4d98-bb68-c413a63dc94d", 00:11:21.065 
"is_configured": true, 00:11:21.065 "data_offset": 0, 00:11:21.065 "data_size": 65536 00:11:21.065 }, 00:11:21.065 { 00:11:21.065 "name": "BaseBdev4", 00:11:21.065 "uuid": "76f2c05f-60f1-4356-8be9-5a1051af6e12", 00:11:21.065 "is_configured": true, 00:11:21.065 "data_offset": 0, 00:11:21.065 "data_size": 65536 00:11:21.065 } 00:11:21.065 ] 00:11:21.065 }' 00:11:21.065 17:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.065 17:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.326 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.326 17:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.326 17:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.326 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:21.326 17:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.326 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:21.326 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:21.326 17:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.326 17:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.326 [2024-11-26 17:56:03.157284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.585 "name": "Existed_Raid", 00:11:21.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.585 "strip_size_kb": 64, 00:11:21.585 "state": "configuring", 00:11:21.585 "raid_level": "concat", 00:11:21.585 "superblock": false, 00:11:21.585 "num_base_bdevs": 4, 00:11:21.585 "num_base_bdevs_discovered": 2, 00:11:21.585 "num_base_bdevs_operational": 4, 
00:11:21.585 "base_bdevs_list": [ 00:11:21.585 { 00:11:21.585 "name": null, 00:11:21.585 "uuid": "d6b54cf6-180a-4910-a623-847cdf7dfdb2", 00:11:21.585 "is_configured": false, 00:11:21.585 "data_offset": 0, 00:11:21.585 "data_size": 65536 00:11:21.585 }, 00:11:21.585 { 00:11:21.585 "name": null, 00:11:21.585 "uuid": "19a678eb-6e97-4067-8f99-79694505f328", 00:11:21.585 "is_configured": false, 00:11:21.585 "data_offset": 0, 00:11:21.585 "data_size": 65536 00:11:21.585 }, 00:11:21.585 { 00:11:21.585 "name": "BaseBdev3", 00:11:21.585 "uuid": "4f76d0b2-2e8b-4d98-bb68-c413a63dc94d", 00:11:21.585 "is_configured": true, 00:11:21.585 "data_offset": 0, 00:11:21.585 "data_size": 65536 00:11:21.585 }, 00:11:21.585 { 00:11:21.585 "name": "BaseBdev4", 00:11:21.585 "uuid": "76f2c05f-60f1-4356-8be9-5a1051af6e12", 00:11:21.585 "is_configured": true, 00:11:21.585 "data_offset": 0, 00:11:21.585 "data_size": 65536 00:11:21.585 } 00:11:21.585 ] 00:11:21.585 }' 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.585 17:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:22.153 17:56:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.153 [2024-11-26 17:56:03.801781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.153 17:56:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.153 "name": "Existed_Raid", 00:11:22.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.153 "strip_size_kb": 64, 00:11:22.153 "state": "configuring", 00:11:22.153 "raid_level": "concat", 00:11:22.153 "superblock": false, 00:11:22.153 "num_base_bdevs": 4, 00:11:22.153 "num_base_bdevs_discovered": 3, 00:11:22.153 "num_base_bdevs_operational": 4, 00:11:22.153 "base_bdevs_list": [ 00:11:22.153 { 00:11:22.153 "name": null, 00:11:22.153 "uuid": "d6b54cf6-180a-4910-a623-847cdf7dfdb2", 00:11:22.153 "is_configured": false, 00:11:22.153 "data_offset": 0, 00:11:22.153 "data_size": 65536 00:11:22.153 }, 00:11:22.153 { 00:11:22.153 "name": "BaseBdev2", 00:11:22.153 "uuid": "19a678eb-6e97-4067-8f99-79694505f328", 00:11:22.153 "is_configured": true, 00:11:22.153 "data_offset": 0, 00:11:22.153 "data_size": 65536 00:11:22.153 }, 00:11:22.153 { 00:11:22.153 "name": "BaseBdev3", 00:11:22.153 "uuid": "4f76d0b2-2e8b-4d98-bb68-c413a63dc94d", 00:11:22.153 "is_configured": true, 00:11:22.153 "data_offset": 0, 00:11:22.153 "data_size": 65536 00:11:22.153 }, 00:11:22.153 { 00:11:22.153 "name": "BaseBdev4", 00:11:22.153 "uuid": "76f2c05f-60f1-4356-8be9-5a1051af6e12", 00:11:22.153 "is_configured": true, 00:11:22.153 "data_offset": 0, 00:11:22.153 "data_size": 65536 00:11:22.153 } 00:11:22.153 ] 00:11:22.153 }' 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.153 17:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.731 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.731 17:56:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:22.731 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.731 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.731 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.731 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d6b54cf6-180a-4910-a623-847cdf7dfdb2 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.732 [2024-11-26 17:56:04.421150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:22.732 [2024-11-26 17:56:04.421312] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:22.732 [2024-11-26 17:56:04.421356] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:22.732 [2024-11-26 17:56:04.421740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:22.732 [2024-11-26 17:56:04.421983] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:22.732 [2024-11-26 17:56:04.422054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:22.732 [2024-11-26 17:56:04.422405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.732 NewBaseBdev 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.732 [ 00:11:22.732 { 
00:11:22.732 "name": "NewBaseBdev", 00:11:22.732 "aliases": [ 00:11:22.732 "d6b54cf6-180a-4910-a623-847cdf7dfdb2" 00:11:22.732 ], 00:11:22.732 "product_name": "Malloc disk", 00:11:22.732 "block_size": 512, 00:11:22.732 "num_blocks": 65536, 00:11:22.732 "uuid": "d6b54cf6-180a-4910-a623-847cdf7dfdb2", 00:11:22.732 "assigned_rate_limits": { 00:11:22.732 "rw_ios_per_sec": 0, 00:11:22.732 "rw_mbytes_per_sec": 0, 00:11:22.732 "r_mbytes_per_sec": 0, 00:11:22.732 "w_mbytes_per_sec": 0 00:11:22.732 }, 00:11:22.732 "claimed": true, 00:11:22.732 "claim_type": "exclusive_write", 00:11:22.732 "zoned": false, 00:11:22.732 "supported_io_types": { 00:11:22.732 "read": true, 00:11:22.732 "write": true, 00:11:22.732 "unmap": true, 00:11:22.732 "flush": true, 00:11:22.732 "reset": true, 00:11:22.732 "nvme_admin": false, 00:11:22.732 "nvme_io": false, 00:11:22.732 "nvme_io_md": false, 00:11:22.732 "write_zeroes": true, 00:11:22.732 "zcopy": true, 00:11:22.732 "get_zone_info": false, 00:11:22.732 "zone_management": false, 00:11:22.732 "zone_append": false, 00:11:22.732 "compare": false, 00:11:22.732 "compare_and_write": false, 00:11:22.732 "abort": true, 00:11:22.732 "seek_hole": false, 00:11:22.732 "seek_data": false, 00:11:22.732 "copy": true, 00:11:22.732 "nvme_iov_md": false 00:11:22.732 }, 00:11:22.732 "memory_domains": [ 00:11:22.732 { 00:11:22.732 "dma_device_id": "system", 00:11:22.732 "dma_device_type": 1 00:11:22.732 }, 00:11:22.732 { 00:11:22.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.732 "dma_device_type": 2 00:11:22.732 } 00:11:22.732 ], 00:11:22.732 "driver_specific": {} 00:11:22.732 } 00:11:22.732 ] 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:22.732 
17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.732 "name": "Existed_Raid", 00:11:22.732 "uuid": "6c7e3745-c15b-48a0-8156-8827d794f1d7", 00:11:22.732 "strip_size_kb": 64, 00:11:22.732 "state": "online", 00:11:22.732 "raid_level": "concat", 00:11:22.732 "superblock": false, 00:11:22.732 "num_base_bdevs": 4, 00:11:22.732 "num_base_bdevs_discovered": 4, 00:11:22.732 
"num_base_bdevs_operational": 4, 00:11:22.732 "base_bdevs_list": [ 00:11:22.732 { 00:11:22.732 "name": "NewBaseBdev", 00:11:22.732 "uuid": "d6b54cf6-180a-4910-a623-847cdf7dfdb2", 00:11:22.732 "is_configured": true, 00:11:22.732 "data_offset": 0, 00:11:22.732 "data_size": 65536 00:11:22.732 }, 00:11:22.732 { 00:11:22.732 "name": "BaseBdev2", 00:11:22.732 "uuid": "19a678eb-6e97-4067-8f99-79694505f328", 00:11:22.732 "is_configured": true, 00:11:22.732 "data_offset": 0, 00:11:22.732 "data_size": 65536 00:11:22.732 }, 00:11:22.732 { 00:11:22.732 "name": "BaseBdev3", 00:11:22.732 "uuid": "4f76d0b2-2e8b-4d98-bb68-c413a63dc94d", 00:11:22.732 "is_configured": true, 00:11:22.732 "data_offset": 0, 00:11:22.732 "data_size": 65536 00:11:22.732 }, 00:11:22.732 { 00:11:22.732 "name": "BaseBdev4", 00:11:22.732 "uuid": "76f2c05f-60f1-4356-8be9-5a1051af6e12", 00:11:22.732 "is_configured": true, 00:11:22.732 "data_offset": 0, 00:11:22.732 "data_size": 65536 00:11:22.732 } 00:11:22.732 ] 00:11:22.732 }' 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.732 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.299 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:23.299 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:23.299 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:23.299 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:23.299 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:23.299 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:23.299 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:23.299 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.299 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.299 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:23.299 [2024-11-26 17:56:04.920880] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.299 17:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.299 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:23.299 "name": "Existed_Raid", 00:11:23.299 "aliases": [ 00:11:23.299 "6c7e3745-c15b-48a0-8156-8827d794f1d7" 00:11:23.299 ], 00:11:23.299 "product_name": "Raid Volume", 00:11:23.299 "block_size": 512, 00:11:23.299 "num_blocks": 262144, 00:11:23.299 "uuid": "6c7e3745-c15b-48a0-8156-8827d794f1d7", 00:11:23.299 "assigned_rate_limits": { 00:11:23.299 "rw_ios_per_sec": 0, 00:11:23.299 "rw_mbytes_per_sec": 0, 00:11:23.299 "r_mbytes_per_sec": 0, 00:11:23.299 "w_mbytes_per_sec": 0 00:11:23.299 }, 00:11:23.299 "claimed": false, 00:11:23.299 "zoned": false, 00:11:23.299 "supported_io_types": { 00:11:23.299 "read": true, 00:11:23.299 "write": true, 00:11:23.299 "unmap": true, 00:11:23.299 "flush": true, 00:11:23.299 "reset": true, 00:11:23.300 "nvme_admin": false, 00:11:23.300 "nvme_io": false, 00:11:23.300 "nvme_io_md": false, 00:11:23.300 "write_zeroes": true, 00:11:23.300 "zcopy": false, 00:11:23.300 "get_zone_info": false, 00:11:23.300 "zone_management": false, 00:11:23.300 "zone_append": false, 00:11:23.300 "compare": false, 00:11:23.300 "compare_and_write": false, 00:11:23.300 "abort": false, 00:11:23.300 "seek_hole": false, 00:11:23.300 "seek_data": false, 00:11:23.300 "copy": false, 00:11:23.300 "nvme_iov_md": false 00:11:23.300 }, 00:11:23.300 "memory_domains": [ 00:11:23.300 { 00:11:23.300 "dma_device_id": "system", 
00:11:23.300 "dma_device_type": 1 00:11:23.300 }, 00:11:23.300 { 00:11:23.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.300 "dma_device_type": 2 00:11:23.300 }, 00:11:23.300 { 00:11:23.300 "dma_device_id": "system", 00:11:23.300 "dma_device_type": 1 00:11:23.300 }, 00:11:23.300 { 00:11:23.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.300 "dma_device_type": 2 00:11:23.300 }, 00:11:23.300 { 00:11:23.300 "dma_device_id": "system", 00:11:23.300 "dma_device_type": 1 00:11:23.300 }, 00:11:23.300 { 00:11:23.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.300 "dma_device_type": 2 00:11:23.300 }, 00:11:23.300 { 00:11:23.300 "dma_device_id": "system", 00:11:23.300 "dma_device_type": 1 00:11:23.300 }, 00:11:23.300 { 00:11:23.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.300 "dma_device_type": 2 00:11:23.300 } 00:11:23.300 ], 00:11:23.300 "driver_specific": { 00:11:23.300 "raid": { 00:11:23.300 "uuid": "6c7e3745-c15b-48a0-8156-8827d794f1d7", 00:11:23.300 "strip_size_kb": 64, 00:11:23.300 "state": "online", 00:11:23.300 "raid_level": "concat", 00:11:23.300 "superblock": false, 00:11:23.300 "num_base_bdevs": 4, 00:11:23.300 "num_base_bdevs_discovered": 4, 00:11:23.300 "num_base_bdevs_operational": 4, 00:11:23.300 "base_bdevs_list": [ 00:11:23.300 { 00:11:23.300 "name": "NewBaseBdev", 00:11:23.300 "uuid": "d6b54cf6-180a-4910-a623-847cdf7dfdb2", 00:11:23.300 "is_configured": true, 00:11:23.300 "data_offset": 0, 00:11:23.300 "data_size": 65536 00:11:23.300 }, 00:11:23.300 { 00:11:23.300 "name": "BaseBdev2", 00:11:23.300 "uuid": "19a678eb-6e97-4067-8f99-79694505f328", 00:11:23.300 "is_configured": true, 00:11:23.300 "data_offset": 0, 00:11:23.300 "data_size": 65536 00:11:23.300 }, 00:11:23.300 { 00:11:23.300 "name": "BaseBdev3", 00:11:23.300 "uuid": "4f76d0b2-2e8b-4d98-bb68-c413a63dc94d", 00:11:23.300 "is_configured": true, 00:11:23.300 "data_offset": 0, 00:11:23.300 "data_size": 65536 00:11:23.300 }, 00:11:23.300 { 00:11:23.300 "name": "BaseBdev4", 
00:11:23.300 "uuid": "76f2c05f-60f1-4356-8be9-5a1051af6e12", 00:11:23.300 "is_configured": true, 00:11:23.300 "data_offset": 0, 00:11:23.300 "data_size": 65536 00:11:23.300 } 00:11:23.300 ] 00:11:23.300 } 00:11:23.300 } 00:11:23.300 }' 00:11:23.300 17:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:23.300 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:23.300 BaseBdev2 00:11:23.300 BaseBdev3 00:11:23.300 BaseBdev4' 00:11:23.300 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.300 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:23.300 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.300 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:23.300 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.300 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.300 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.300 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.300 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.300 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.300 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.300 17:56:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:23.300 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.300 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.300 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.300 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:23.559 17:56:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.559 [2024-11-26 17:56:05.283864] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.559 [2024-11-26 17:56:05.283953] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.559 [2024-11-26 17:56:05.284095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.559 [2024-11-26 17:56:05.284213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.559 [2024-11-26 17:56:05.284267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71552 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71552 ']' 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71552 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71552 00:11:23.559 killing process with pid 71552 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71552' 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71552 00:11:23.559 [2024-11-26 17:56:05.322488] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:23.559 17:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71552 00:11:24.128 [2024-11-26 17:56:05.807587] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:25.506 ************************************ 00:11:25.506 END TEST raid_state_function_test 00:11:25.506 ************************************ 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:25.506 00:11:25.506 real 0m12.771s 00:11:25.506 user 0m20.096s 00:11:25.506 sys 0m2.225s 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.506 17:56:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:11:25.506 17:56:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:25.506 17:56:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.506 17:56:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:25.506 ************************************ 00:11:25.506 START TEST raid_state_function_test_sb 00:11:25.506 ************************************ 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:25.506 17:56:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72234 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72234' 00:11:25.506 Process raid pid: 72234 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72234 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72234 ']' 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.506 17:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.506 [2024-11-26 17:56:07.341481] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:11:25.506 [2024-11-26 17:56:07.341609] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.764 [2024-11-26 17:56:07.522557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.023 [2024-11-26 17:56:07.666464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.349 [2024-11-26 17:56:07.916473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.349 [2024-11-26 17:56:07.916542] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.608 [2024-11-26 17:56:08.257292] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.608 [2024-11-26 17:56:08.257424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.608 [2024-11-26 17:56:08.257472] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:26.608 [2024-11-26 17:56:08.257503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:26.608 [2024-11-26 17:56:08.257543] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:26.608 [2024-11-26 17:56:08.257571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:26.608 [2024-11-26 17:56:08.257615] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:26.608 [2024-11-26 17:56:08.257643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.608 
17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.608 "name": "Existed_Raid", 00:11:26.608 "uuid": "b1fd46a7-d083-4f3e-805e-797c71f754dc", 00:11:26.608 "strip_size_kb": 64, 00:11:26.608 "state": "configuring", 00:11:26.608 "raid_level": "concat", 00:11:26.608 "superblock": true, 00:11:26.608 "num_base_bdevs": 4, 00:11:26.608 "num_base_bdevs_discovered": 0, 00:11:26.608 "num_base_bdevs_operational": 4, 00:11:26.608 "base_bdevs_list": [ 00:11:26.608 { 00:11:26.608 "name": "BaseBdev1", 00:11:26.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.608 "is_configured": false, 00:11:26.608 "data_offset": 0, 00:11:26.608 "data_size": 0 00:11:26.608 }, 00:11:26.608 { 00:11:26.608 "name": "BaseBdev2", 00:11:26.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.608 "is_configured": false, 00:11:26.608 "data_offset": 0, 00:11:26.608 "data_size": 0 00:11:26.608 }, 00:11:26.608 { 00:11:26.608 "name": "BaseBdev3", 00:11:26.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.608 "is_configured": false, 00:11:26.608 "data_offset": 0, 00:11:26.608 "data_size": 0 00:11:26.608 }, 00:11:26.608 { 00:11:26.608 "name": "BaseBdev4", 00:11:26.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.608 "is_configured": false, 00:11:26.608 "data_offset": 0, 00:11:26.608 "data_size": 0 00:11:26.608 } 00:11:26.608 ] 00:11:26.608 }' 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.608 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.178 17:56:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.178 [2024-11-26 17:56:08.769228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:27.178 [2024-11-26 17:56:08.769323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.178 [2024-11-26 17:56:08.781259] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:27.178 [2024-11-26 17:56:08.781360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:27.178 [2024-11-26 17:56:08.781401] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.178 [2024-11-26 17:56:08.781430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.178 [2024-11-26 17:56:08.781482] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:27.178 [2024-11-26 17:56:08.781510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:27.178 [2024-11-26 17:56:08.781553] bdev.c:8666:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:27.178 [2024-11-26 17:56:08.781581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.178 BaseBdev1 00:11:27.178 [2024-11-26 17:56:08.836329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.178 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.178 [ 00:11:27.178 { 00:11:27.178 "name": "BaseBdev1", 00:11:27.178 "aliases": [ 00:11:27.178 "42b8ea8e-f959-403c-a9d2-ecdfe89cddf7" 00:11:27.178 ], 00:11:27.178 "product_name": "Malloc disk", 00:11:27.178 "block_size": 512, 00:11:27.178 "num_blocks": 65536, 00:11:27.178 "uuid": "42b8ea8e-f959-403c-a9d2-ecdfe89cddf7", 00:11:27.178 "assigned_rate_limits": { 00:11:27.178 "rw_ios_per_sec": 0, 00:11:27.178 "rw_mbytes_per_sec": 0, 00:11:27.178 "r_mbytes_per_sec": 0, 00:11:27.178 "w_mbytes_per_sec": 0 00:11:27.178 }, 00:11:27.178 "claimed": true, 00:11:27.178 "claim_type": "exclusive_write", 00:11:27.179 "zoned": false, 00:11:27.179 "supported_io_types": { 00:11:27.179 "read": true, 00:11:27.179 "write": true, 00:11:27.179 "unmap": true, 00:11:27.179 "flush": true, 00:11:27.179 "reset": true, 00:11:27.179 "nvme_admin": false, 00:11:27.179 "nvme_io": false, 00:11:27.179 "nvme_io_md": false, 00:11:27.179 "write_zeroes": true, 00:11:27.179 "zcopy": true, 00:11:27.179 "get_zone_info": false, 00:11:27.179 "zone_management": false, 00:11:27.179 "zone_append": false, 00:11:27.179 "compare": false, 00:11:27.179 "compare_and_write": false, 00:11:27.179 "abort": true, 00:11:27.179 "seek_hole": false, 00:11:27.179 "seek_data": false, 00:11:27.179 "copy": true, 00:11:27.179 "nvme_iov_md": false 00:11:27.179 }, 00:11:27.179 "memory_domains": [ 00:11:27.179 { 00:11:27.179 "dma_device_id": "system", 00:11:27.179 "dma_device_type": 1 00:11:27.179 }, 00:11:27.179 { 00:11:27.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.179 "dma_device_type": 2 00:11:27.179 } 
00:11:27.179 ], 00:11:27.179 "driver_specific": {} 00:11:27.179 } 00:11:27.179 ] 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.179 17:56:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.179 "name": "Existed_Raid", 00:11:27.179 "uuid": "a5f90e3e-84a5-4b6a-8aae-e879c91806f5", 00:11:27.179 "strip_size_kb": 64, 00:11:27.179 "state": "configuring", 00:11:27.179 "raid_level": "concat", 00:11:27.179 "superblock": true, 00:11:27.179 "num_base_bdevs": 4, 00:11:27.179 "num_base_bdevs_discovered": 1, 00:11:27.179 "num_base_bdevs_operational": 4, 00:11:27.179 "base_bdevs_list": [ 00:11:27.179 { 00:11:27.179 "name": "BaseBdev1", 00:11:27.179 "uuid": "42b8ea8e-f959-403c-a9d2-ecdfe89cddf7", 00:11:27.179 "is_configured": true, 00:11:27.179 "data_offset": 2048, 00:11:27.179 "data_size": 63488 00:11:27.179 }, 00:11:27.179 { 00:11:27.179 "name": "BaseBdev2", 00:11:27.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.179 "is_configured": false, 00:11:27.179 "data_offset": 0, 00:11:27.179 "data_size": 0 00:11:27.179 }, 00:11:27.179 { 00:11:27.179 "name": "BaseBdev3", 00:11:27.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.179 "is_configured": false, 00:11:27.179 "data_offset": 0, 00:11:27.179 "data_size": 0 00:11:27.179 }, 00:11:27.179 { 00:11:27.179 "name": "BaseBdev4", 00:11:27.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.179 "is_configured": false, 00:11:27.179 "data_offset": 0, 00:11:27.179 "data_size": 0 00:11:27.179 } 00:11:27.179 ] 00:11:27.179 }' 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.179 17:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.447 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:27.447 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.447 17:56:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.447 [2024-11-26 17:56:09.303722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:27.447 [2024-11-26 17:56:09.303859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:27.447 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.712 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:27.712 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.712 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.712 [2024-11-26 17:56:09.315762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.712 [2024-11-26 17:56:09.317982] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.713 [2024-11-26 17:56:09.318108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.713 [2024-11-26 17:56:09.318161] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:27.713 [2024-11-26 17:56:09.318198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:27.713 [2024-11-26 17:56:09.318241] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:27.713 [2024-11-26 17:56:09.318273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:27.713 "name": "Existed_Raid", 00:11:27.713 "uuid": "386630d2-cdb6-48bf-a81f-ed260075811e", 00:11:27.713 "strip_size_kb": 64, 00:11:27.713 "state": "configuring", 00:11:27.713 "raid_level": "concat", 00:11:27.713 "superblock": true, 00:11:27.713 "num_base_bdevs": 4, 00:11:27.713 "num_base_bdevs_discovered": 1, 00:11:27.713 "num_base_bdevs_operational": 4, 00:11:27.713 "base_bdevs_list": [ 00:11:27.713 { 00:11:27.713 "name": "BaseBdev1", 00:11:27.713 "uuid": "42b8ea8e-f959-403c-a9d2-ecdfe89cddf7", 00:11:27.713 "is_configured": true, 00:11:27.713 "data_offset": 2048, 00:11:27.713 "data_size": 63488 00:11:27.713 }, 00:11:27.713 { 00:11:27.713 "name": "BaseBdev2", 00:11:27.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.713 "is_configured": false, 00:11:27.713 "data_offset": 0, 00:11:27.713 "data_size": 0 00:11:27.713 }, 00:11:27.713 { 00:11:27.713 "name": "BaseBdev3", 00:11:27.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.713 "is_configured": false, 00:11:27.713 "data_offset": 0, 00:11:27.713 "data_size": 0 00:11:27.713 }, 00:11:27.713 { 00:11:27.713 "name": "BaseBdev4", 00:11:27.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.713 "is_configured": false, 00:11:27.713 "data_offset": 0, 00:11:27.713 "data_size": 0 00:11:27.713 } 00:11:27.713 ] 00:11:27.713 }' 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.713 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.973 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:27.973 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.973 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.232 [2024-11-26 17:56:09.851204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:28.232 BaseBdev2 00:11:28.232 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.232 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:28.232 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:28.232 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.232 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:28.232 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.232 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.232 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.232 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.232 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.232 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.232 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:28.232 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.232 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.232 [ 00:11:28.232 { 00:11:28.232 "name": "BaseBdev2", 00:11:28.232 "aliases": [ 00:11:28.232 "a1c163bb-78d2-459f-9bb9-50f7c32b24f3" 00:11:28.232 ], 00:11:28.232 "product_name": "Malloc disk", 00:11:28.232 "block_size": 512, 00:11:28.232 "num_blocks": 65536, 00:11:28.233 "uuid": "a1c163bb-78d2-459f-9bb9-50f7c32b24f3", 
00:11:28.233 "assigned_rate_limits": { 00:11:28.233 "rw_ios_per_sec": 0, 00:11:28.233 "rw_mbytes_per_sec": 0, 00:11:28.233 "r_mbytes_per_sec": 0, 00:11:28.233 "w_mbytes_per_sec": 0 00:11:28.233 }, 00:11:28.233 "claimed": true, 00:11:28.233 "claim_type": "exclusive_write", 00:11:28.233 "zoned": false, 00:11:28.233 "supported_io_types": { 00:11:28.233 "read": true, 00:11:28.233 "write": true, 00:11:28.233 "unmap": true, 00:11:28.233 "flush": true, 00:11:28.233 "reset": true, 00:11:28.233 "nvme_admin": false, 00:11:28.233 "nvme_io": false, 00:11:28.233 "nvme_io_md": false, 00:11:28.233 "write_zeroes": true, 00:11:28.233 "zcopy": true, 00:11:28.233 "get_zone_info": false, 00:11:28.233 "zone_management": false, 00:11:28.233 "zone_append": false, 00:11:28.233 "compare": false, 00:11:28.233 "compare_and_write": false, 00:11:28.233 "abort": true, 00:11:28.233 "seek_hole": false, 00:11:28.233 "seek_data": false, 00:11:28.233 "copy": true, 00:11:28.233 "nvme_iov_md": false 00:11:28.233 }, 00:11:28.233 "memory_domains": [ 00:11:28.233 { 00:11:28.233 "dma_device_id": "system", 00:11:28.233 "dma_device_type": 1 00:11:28.233 }, 00:11:28.233 { 00:11:28.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.233 "dma_device_type": 2 00:11:28.233 } 00:11:28.233 ], 00:11:28.233 "driver_specific": {} 00:11:28.233 } 00:11:28.233 ] 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.233 "name": "Existed_Raid", 00:11:28.233 "uuid": "386630d2-cdb6-48bf-a81f-ed260075811e", 00:11:28.233 "strip_size_kb": 64, 00:11:28.233 "state": "configuring", 00:11:28.233 "raid_level": "concat", 00:11:28.233 "superblock": true, 00:11:28.233 "num_base_bdevs": 4, 00:11:28.233 "num_base_bdevs_discovered": 2, 00:11:28.233 
"num_base_bdevs_operational": 4, 00:11:28.233 "base_bdevs_list": [ 00:11:28.233 { 00:11:28.233 "name": "BaseBdev1", 00:11:28.233 "uuid": "42b8ea8e-f959-403c-a9d2-ecdfe89cddf7", 00:11:28.233 "is_configured": true, 00:11:28.233 "data_offset": 2048, 00:11:28.233 "data_size": 63488 00:11:28.233 }, 00:11:28.233 { 00:11:28.233 "name": "BaseBdev2", 00:11:28.233 "uuid": "a1c163bb-78d2-459f-9bb9-50f7c32b24f3", 00:11:28.233 "is_configured": true, 00:11:28.233 "data_offset": 2048, 00:11:28.233 "data_size": 63488 00:11:28.233 }, 00:11:28.233 { 00:11:28.233 "name": "BaseBdev3", 00:11:28.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.233 "is_configured": false, 00:11:28.233 "data_offset": 0, 00:11:28.233 "data_size": 0 00:11:28.233 }, 00:11:28.233 { 00:11:28.233 "name": "BaseBdev4", 00:11:28.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.233 "is_configured": false, 00:11:28.233 "data_offset": 0, 00:11:28.233 "data_size": 0 00:11:28.233 } 00:11:28.233 ] 00:11:28.233 }' 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.233 17:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.492 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:28.492 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.492 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.492 [2024-11-26 17:56:10.338942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.492 BaseBdev3 00:11:28.492 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.492 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:28.492 17:56:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:28.492 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.492 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:28.492 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.492 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.492 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.492 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.492 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.752 [ 00:11:28.752 { 00:11:28.752 "name": "BaseBdev3", 00:11:28.752 "aliases": [ 00:11:28.752 "51d12175-987b-4983-87c7-fb92ed6c269d" 00:11:28.752 ], 00:11:28.752 "product_name": "Malloc disk", 00:11:28.752 "block_size": 512, 00:11:28.752 "num_blocks": 65536, 00:11:28.752 "uuid": "51d12175-987b-4983-87c7-fb92ed6c269d", 00:11:28.752 "assigned_rate_limits": { 00:11:28.752 "rw_ios_per_sec": 0, 00:11:28.752 "rw_mbytes_per_sec": 0, 00:11:28.752 "r_mbytes_per_sec": 0, 00:11:28.752 "w_mbytes_per_sec": 0 00:11:28.752 }, 00:11:28.752 "claimed": true, 00:11:28.752 "claim_type": "exclusive_write", 00:11:28.752 "zoned": false, 00:11:28.752 "supported_io_types": { 
00:11:28.752 "read": true, 00:11:28.752 "write": true, 00:11:28.752 "unmap": true, 00:11:28.752 "flush": true, 00:11:28.752 "reset": true, 00:11:28.752 "nvme_admin": false, 00:11:28.752 "nvme_io": false, 00:11:28.752 "nvme_io_md": false, 00:11:28.752 "write_zeroes": true, 00:11:28.752 "zcopy": true, 00:11:28.752 "get_zone_info": false, 00:11:28.752 "zone_management": false, 00:11:28.752 "zone_append": false, 00:11:28.752 "compare": false, 00:11:28.752 "compare_and_write": false, 00:11:28.752 "abort": true, 00:11:28.752 "seek_hole": false, 00:11:28.752 "seek_data": false, 00:11:28.752 "copy": true, 00:11:28.752 "nvme_iov_md": false 00:11:28.752 }, 00:11:28.752 "memory_domains": [ 00:11:28.752 { 00:11:28.752 "dma_device_id": "system", 00:11:28.752 "dma_device_type": 1 00:11:28.752 }, 00:11:28.752 { 00:11:28.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.752 "dma_device_type": 2 00:11:28.752 } 00:11:28.752 ], 00:11:28.752 "driver_specific": {} 00:11:28.752 } 00:11:28.752 ] 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.752 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.752 "name": "Existed_Raid", 00:11:28.752 "uuid": "386630d2-cdb6-48bf-a81f-ed260075811e", 00:11:28.752 "strip_size_kb": 64, 00:11:28.752 "state": "configuring", 00:11:28.752 "raid_level": "concat", 00:11:28.752 "superblock": true, 00:11:28.752 "num_base_bdevs": 4, 00:11:28.752 "num_base_bdevs_discovered": 3, 00:11:28.752 "num_base_bdevs_operational": 4, 00:11:28.753 "base_bdevs_list": [ 00:11:28.753 { 00:11:28.753 "name": "BaseBdev1", 00:11:28.753 "uuid": "42b8ea8e-f959-403c-a9d2-ecdfe89cddf7", 00:11:28.753 "is_configured": true, 00:11:28.753 "data_offset": 2048, 00:11:28.753 "data_size": 63488 00:11:28.753 }, 00:11:28.753 { 00:11:28.753 "name": "BaseBdev2", 00:11:28.753 
"uuid": "a1c163bb-78d2-459f-9bb9-50f7c32b24f3", 00:11:28.753 "is_configured": true, 00:11:28.753 "data_offset": 2048, 00:11:28.753 "data_size": 63488 00:11:28.753 }, 00:11:28.753 { 00:11:28.753 "name": "BaseBdev3", 00:11:28.753 "uuid": "51d12175-987b-4983-87c7-fb92ed6c269d", 00:11:28.753 "is_configured": true, 00:11:28.753 "data_offset": 2048, 00:11:28.753 "data_size": 63488 00:11:28.753 }, 00:11:28.753 { 00:11:28.753 "name": "BaseBdev4", 00:11:28.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.753 "is_configured": false, 00:11:28.753 "data_offset": 0, 00:11:28.753 "data_size": 0 00:11:28.753 } 00:11:28.753 ] 00:11:28.753 }' 00:11:28.753 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.753 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.013 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:29.013 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.013 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.272 [2024-11-26 17:56:10.882470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:29.272 [2024-11-26 17:56:10.882844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:29.272 [2024-11-26 17:56:10.882906] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:29.272 [2024-11-26 17:56:10.883263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:29.272 BaseBdev4 00:11:29.272 [2024-11-26 17:56:10.883476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:29.272 [2024-11-26 17:56:10.883493] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:29.272 [2024-11-26 17:56:10.883668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.272 [ 00:11:29.272 { 00:11:29.272 "name": "BaseBdev4", 00:11:29.272 "aliases": [ 00:11:29.272 "259c0625-75fe-461f-a762-13ee570459fd" 00:11:29.272 ], 00:11:29.272 "product_name": "Malloc disk", 00:11:29.272 "block_size": 512, 00:11:29.272 
"num_blocks": 65536, 00:11:29.272 "uuid": "259c0625-75fe-461f-a762-13ee570459fd", 00:11:29.272 "assigned_rate_limits": { 00:11:29.272 "rw_ios_per_sec": 0, 00:11:29.272 "rw_mbytes_per_sec": 0, 00:11:29.272 "r_mbytes_per_sec": 0, 00:11:29.272 "w_mbytes_per_sec": 0 00:11:29.272 }, 00:11:29.272 "claimed": true, 00:11:29.272 "claim_type": "exclusive_write", 00:11:29.272 "zoned": false, 00:11:29.272 "supported_io_types": { 00:11:29.272 "read": true, 00:11:29.272 "write": true, 00:11:29.272 "unmap": true, 00:11:29.272 "flush": true, 00:11:29.272 "reset": true, 00:11:29.272 "nvme_admin": false, 00:11:29.272 "nvme_io": false, 00:11:29.272 "nvme_io_md": false, 00:11:29.272 "write_zeroes": true, 00:11:29.272 "zcopy": true, 00:11:29.272 "get_zone_info": false, 00:11:29.272 "zone_management": false, 00:11:29.272 "zone_append": false, 00:11:29.272 "compare": false, 00:11:29.272 "compare_and_write": false, 00:11:29.272 "abort": true, 00:11:29.272 "seek_hole": false, 00:11:29.272 "seek_data": false, 00:11:29.272 "copy": true, 00:11:29.272 "nvme_iov_md": false 00:11:29.272 }, 00:11:29.272 "memory_domains": [ 00:11:29.272 { 00:11:29.272 "dma_device_id": "system", 00:11:29.272 "dma_device_type": 1 00:11:29.272 }, 00:11:29.272 { 00:11:29.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.272 "dma_device_type": 2 00:11:29.272 } 00:11:29.272 ], 00:11:29.272 "driver_specific": {} 00:11:29.272 } 00:11:29.272 ] 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.272 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.273 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.273 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.273 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.273 "name": "Existed_Raid", 00:11:29.273 "uuid": "386630d2-cdb6-48bf-a81f-ed260075811e", 00:11:29.273 "strip_size_kb": 64, 00:11:29.273 "state": "online", 00:11:29.273 "raid_level": "concat", 00:11:29.273 "superblock": true, 00:11:29.273 "num_base_bdevs": 4, 
00:11:29.273 "num_base_bdevs_discovered": 4, 00:11:29.273 "num_base_bdevs_operational": 4, 00:11:29.273 "base_bdevs_list": [ 00:11:29.273 { 00:11:29.273 "name": "BaseBdev1", 00:11:29.273 "uuid": "42b8ea8e-f959-403c-a9d2-ecdfe89cddf7", 00:11:29.273 "is_configured": true, 00:11:29.273 "data_offset": 2048, 00:11:29.273 "data_size": 63488 00:11:29.273 }, 00:11:29.273 { 00:11:29.273 "name": "BaseBdev2", 00:11:29.273 "uuid": "a1c163bb-78d2-459f-9bb9-50f7c32b24f3", 00:11:29.273 "is_configured": true, 00:11:29.273 "data_offset": 2048, 00:11:29.273 "data_size": 63488 00:11:29.273 }, 00:11:29.273 { 00:11:29.273 "name": "BaseBdev3", 00:11:29.273 "uuid": "51d12175-987b-4983-87c7-fb92ed6c269d", 00:11:29.273 "is_configured": true, 00:11:29.273 "data_offset": 2048, 00:11:29.273 "data_size": 63488 00:11:29.273 }, 00:11:29.273 { 00:11:29.273 "name": "BaseBdev4", 00:11:29.273 "uuid": "259c0625-75fe-461f-a762-13ee570459fd", 00:11:29.273 "is_configured": true, 00:11:29.273 "data_offset": 2048, 00:11:29.273 "data_size": 63488 00:11:29.273 } 00:11:29.273 ] 00:11:29.273 }' 00:11:29.273 17:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.273 17:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.530 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:29.530 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:29.530 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:29.530 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:29.530 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:29.530 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:29.787 
17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:29.787 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.787 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:29.787 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.787 [2024-11-26 17:56:11.402256] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.787 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.787 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:29.787 "name": "Existed_Raid", 00:11:29.787 "aliases": [ 00:11:29.787 "386630d2-cdb6-48bf-a81f-ed260075811e" 00:11:29.787 ], 00:11:29.788 "product_name": "Raid Volume", 00:11:29.788 "block_size": 512, 00:11:29.788 "num_blocks": 253952, 00:11:29.788 "uuid": "386630d2-cdb6-48bf-a81f-ed260075811e", 00:11:29.788 "assigned_rate_limits": { 00:11:29.788 "rw_ios_per_sec": 0, 00:11:29.788 "rw_mbytes_per_sec": 0, 00:11:29.788 "r_mbytes_per_sec": 0, 00:11:29.788 "w_mbytes_per_sec": 0 00:11:29.788 }, 00:11:29.788 "claimed": false, 00:11:29.788 "zoned": false, 00:11:29.788 "supported_io_types": { 00:11:29.788 "read": true, 00:11:29.788 "write": true, 00:11:29.788 "unmap": true, 00:11:29.788 "flush": true, 00:11:29.788 "reset": true, 00:11:29.788 "nvme_admin": false, 00:11:29.788 "nvme_io": false, 00:11:29.788 "nvme_io_md": false, 00:11:29.788 "write_zeroes": true, 00:11:29.788 "zcopy": false, 00:11:29.788 "get_zone_info": false, 00:11:29.788 "zone_management": false, 00:11:29.788 "zone_append": false, 00:11:29.788 "compare": false, 00:11:29.788 "compare_and_write": false, 00:11:29.788 "abort": false, 00:11:29.788 "seek_hole": false, 00:11:29.788 "seek_data": false, 00:11:29.788 "copy": false, 00:11:29.788 
"nvme_iov_md": false 00:11:29.788 }, 00:11:29.788 "memory_domains": [ 00:11:29.788 { 00:11:29.788 "dma_device_id": "system", 00:11:29.788 "dma_device_type": 1 00:11:29.788 }, 00:11:29.788 { 00:11:29.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.788 "dma_device_type": 2 00:11:29.788 }, 00:11:29.788 { 00:11:29.788 "dma_device_id": "system", 00:11:29.788 "dma_device_type": 1 00:11:29.788 }, 00:11:29.788 { 00:11:29.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.788 "dma_device_type": 2 00:11:29.788 }, 00:11:29.788 { 00:11:29.788 "dma_device_id": "system", 00:11:29.788 "dma_device_type": 1 00:11:29.788 }, 00:11:29.788 { 00:11:29.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.788 "dma_device_type": 2 00:11:29.788 }, 00:11:29.788 { 00:11:29.788 "dma_device_id": "system", 00:11:29.788 "dma_device_type": 1 00:11:29.788 }, 00:11:29.788 { 00:11:29.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.788 "dma_device_type": 2 00:11:29.788 } 00:11:29.788 ], 00:11:29.788 "driver_specific": { 00:11:29.788 "raid": { 00:11:29.788 "uuid": "386630d2-cdb6-48bf-a81f-ed260075811e", 00:11:29.788 "strip_size_kb": 64, 00:11:29.788 "state": "online", 00:11:29.788 "raid_level": "concat", 00:11:29.788 "superblock": true, 00:11:29.788 "num_base_bdevs": 4, 00:11:29.788 "num_base_bdevs_discovered": 4, 00:11:29.788 "num_base_bdevs_operational": 4, 00:11:29.788 "base_bdevs_list": [ 00:11:29.788 { 00:11:29.788 "name": "BaseBdev1", 00:11:29.788 "uuid": "42b8ea8e-f959-403c-a9d2-ecdfe89cddf7", 00:11:29.788 "is_configured": true, 00:11:29.788 "data_offset": 2048, 00:11:29.788 "data_size": 63488 00:11:29.788 }, 00:11:29.788 { 00:11:29.788 "name": "BaseBdev2", 00:11:29.788 "uuid": "a1c163bb-78d2-459f-9bb9-50f7c32b24f3", 00:11:29.788 "is_configured": true, 00:11:29.788 "data_offset": 2048, 00:11:29.788 "data_size": 63488 00:11:29.788 }, 00:11:29.788 { 00:11:29.788 "name": "BaseBdev3", 00:11:29.788 "uuid": "51d12175-987b-4983-87c7-fb92ed6c269d", 00:11:29.788 "is_configured": true, 
00:11:29.788 "data_offset": 2048, 00:11:29.788 "data_size": 63488 00:11:29.788 }, 00:11:29.788 { 00:11:29.788 "name": "BaseBdev4", 00:11:29.788 "uuid": "259c0625-75fe-461f-a762-13ee570459fd", 00:11:29.788 "is_configured": true, 00:11:29.788 "data_offset": 2048, 00:11:29.788 "data_size": 63488 00:11:29.788 } 00:11:29.788 ] 00:11:29.788 } 00:11:29.788 } 00:11:29.788 }' 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:29.788 BaseBdev2 00:11:29.788 BaseBdev3 00:11:29.788 BaseBdev4' 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.788 17:56:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.788 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.046 [2024-11-26 17:56:11.705391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.046 [2024-11-26 17:56:11.705429] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.046 [2024-11-26 17:56:11.705491] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.046 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.047 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.047 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.047 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.047 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.047 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.047 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.047 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.047 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:30.047 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.047 "name": "Existed_Raid", 00:11:30.047 "uuid": "386630d2-cdb6-48bf-a81f-ed260075811e", 00:11:30.047 "strip_size_kb": 64, 00:11:30.047 "state": "offline", 00:11:30.047 "raid_level": "concat", 00:11:30.047 "superblock": true, 00:11:30.047 "num_base_bdevs": 4, 00:11:30.047 "num_base_bdevs_discovered": 3, 00:11:30.047 "num_base_bdevs_operational": 3, 00:11:30.047 "base_bdevs_list": [ 00:11:30.047 { 00:11:30.047 "name": null, 00:11:30.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.047 "is_configured": false, 00:11:30.047 "data_offset": 0, 00:11:30.047 "data_size": 63488 00:11:30.047 }, 00:11:30.047 { 00:11:30.047 "name": "BaseBdev2", 00:11:30.047 "uuid": "a1c163bb-78d2-459f-9bb9-50f7c32b24f3", 00:11:30.047 "is_configured": true, 00:11:30.047 "data_offset": 2048, 00:11:30.047 "data_size": 63488 00:11:30.047 }, 00:11:30.047 { 00:11:30.047 "name": "BaseBdev3", 00:11:30.047 "uuid": "51d12175-987b-4983-87c7-fb92ed6c269d", 00:11:30.047 "is_configured": true, 00:11:30.047 "data_offset": 2048, 00:11:30.047 "data_size": 63488 00:11:30.047 }, 00:11:30.047 { 00:11:30.047 "name": "BaseBdev4", 00:11:30.047 "uuid": "259c0625-75fe-461f-a762-13ee570459fd", 00:11:30.047 "is_configured": true, 00:11:30.047 "data_offset": 2048, 00:11:30.047 "data_size": 63488 00:11:30.047 } 00:11:30.047 ] 00:11:30.047 }' 00:11:30.047 17:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.047 17:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.612 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:30.612 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:30.612 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.612 
17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:30.612 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.612 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.612 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.612 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:30.612 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:30.613 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:30.613 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.613 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.613 [2024-11-26 17:56:12.325301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:30.613 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.613 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:30.613 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:30.613 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.613 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.613 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:30.613 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.613 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:30.871 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:30.871 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:30.871 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:30.871 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.871 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.871 [2024-11-26 17:56:12.494598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:30.871 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.871 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:30.871 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:30.871 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.871 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:30.871 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.871 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.871 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.871 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:30.871 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:30.871 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:30.871 17:56:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.871 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.871 [2024-11-26 17:56:12.668353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:30.871 [2024-11-26 17:56:12.668415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.128 BaseBdev2 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.128 [ 00:11:31.128 { 00:11:31.128 "name": "BaseBdev2", 00:11:31.128 "aliases": [ 00:11:31.128 
"0cb30a6d-4b88-4248-9c65-4937063bf69f" 00:11:31.128 ], 00:11:31.128 "product_name": "Malloc disk", 00:11:31.128 "block_size": 512, 00:11:31.128 "num_blocks": 65536, 00:11:31.128 "uuid": "0cb30a6d-4b88-4248-9c65-4937063bf69f", 00:11:31.128 "assigned_rate_limits": { 00:11:31.128 "rw_ios_per_sec": 0, 00:11:31.128 "rw_mbytes_per_sec": 0, 00:11:31.128 "r_mbytes_per_sec": 0, 00:11:31.128 "w_mbytes_per_sec": 0 00:11:31.128 }, 00:11:31.128 "claimed": false, 00:11:31.128 "zoned": false, 00:11:31.128 "supported_io_types": { 00:11:31.128 "read": true, 00:11:31.128 "write": true, 00:11:31.128 "unmap": true, 00:11:31.128 "flush": true, 00:11:31.128 "reset": true, 00:11:31.128 "nvme_admin": false, 00:11:31.128 "nvme_io": false, 00:11:31.128 "nvme_io_md": false, 00:11:31.128 "write_zeroes": true, 00:11:31.128 "zcopy": true, 00:11:31.128 "get_zone_info": false, 00:11:31.128 "zone_management": false, 00:11:31.128 "zone_append": false, 00:11:31.128 "compare": false, 00:11:31.128 "compare_and_write": false, 00:11:31.128 "abort": true, 00:11:31.128 "seek_hole": false, 00:11:31.128 "seek_data": false, 00:11:31.128 "copy": true, 00:11:31.128 "nvme_iov_md": false 00:11:31.128 }, 00:11:31.128 "memory_domains": [ 00:11:31.128 { 00:11:31.128 "dma_device_id": "system", 00:11:31.128 "dma_device_type": 1 00:11:31.128 }, 00:11:31.128 { 00:11:31.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.128 "dma_device_type": 2 00:11:31.128 } 00:11:31.128 ], 00:11:31.128 "driver_specific": {} 00:11:31.128 } 00:11:31.128 ] 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.128 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.129 17:56:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:31.129 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.129 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.129 BaseBdev3 00:11:31.129 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.129 17:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:31.129 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:31.129 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.386 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.386 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.386 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.386 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.386 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.386 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.386 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.386 17:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.386 [ 00:11:31.386 { 
00:11:31.386 "name": "BaseBdev3", 00:11:31.386 "aliases": [ 00:11:31.386 "49e06e5d-7639-479a-b54c-1dc3117aeb6f" 00:11:31.386 ], 00:11:31.386 "product_name": "Malloc disk", 00:11:31.386 "block_size": 512, 00:11:31.386 "num_blocks": 65536, 00:11:31.386 "uuid": "49e06e5d-7639-479a-b54c-1dc3117aeb6f", 00:11:31.386 "assigned_rate_limits": { 00:11:31.386 "rw_ios_per_sec": 0, 00:11:31.386 "rw_mbytes_per_sec": 0, 00:11:31.386 "r_mbytes_per_sec": 0, 00:11:31.386 "w_mbytes_per_sec": 0 00:11:31.386 }, 00:11:31.386 "claimed": false, 00:11:31.386 "zoned": false, 00:11:31.386 "supported_io_types": { 00:11:31.386 "read": true, 00:11:31.386 "write": true, 00:11:31.386 "unmap": true, 00:11:31.386 "flush": true, 00:11:31.386 "reset": true, 00:11:31.386 "nvme_admin": false, 00:11:31.386 "nvme_io": false, 00:11:31.386 "nvme_io_md": false, 00:11:31.386 "write_zeroes": true, 00:11:31.386 "zcopy": true, 00:11:31.386 "get_zone_info": false, 00:11:31.386 "zone_management": false, 00:11:31.386 "zone_append": false, 00:11:31.386 "compare": false, 00:11:31.386 "compare_and_write": false, 00:11:31.386 "abort": true, 00:11:31.386 "seek_hole": false, 00:11:31.386 "seek_data": false, 00:11:31.386 "copy": true, 00:11:31.386 "nvme_iov_md": false 00:11:31.386 }, 00:11:31.386 "memory_domains": [ 00:11:31.386 { 00:11:31.386 "dma_device_id": "system", 00:11:31.386 "dma_device_type": 1 00:11:31.386 }, 00:11:31.386 { 00:11:31.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.386 "dma_device_type": 2 00:11:31.386 } 00:11:31.386 ], 00:11:31.386 "driver_specific": {} 00:11:31.386 } 00:11:31.386 ] 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.386 BaseBdev4 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.386 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:31.386 [ 00:11:31.386 { 00:11:31.386 "name": "BaseBdev4", 00:11:31.386 "aliases": [ 00:11:31.386 "e19624e7-6643-49df-a53f-c50a2b9350f6" 00:11:31.386 ], 00:11:31.386 "product_name": "Malloc disk", 00:11:31.386 "block_size": 512, 00:11:31.387 "num_blocks": 65536, 00:11:31.387 "uuid": "e19624e7-6643-49df-a53f-c50a2b9350f6", 00:11:31.387 "assigned_rate_limits": { 00:11:31.387 "rw_ios_per_sec": 0, 00:11:31.387 "rw_mbytes_per_sec": 0, 00:11:31.387 "r_mbytes_per_sec": 0, 00:11:31.387 "w_mbytes_per_sec": 0 00:11:31.387 }, 00:11:31.387 "claimed": false, 00:11:31.387 "zoned": false, 00:11:31.387 "supported_io_types": { 00:11:31.387 "read": true, 00:11:31.387 "write": true, 00:11:31.387 "unmap": true, 00:11:31.387 "flush": true, 00:11:31.387 "reset": true, 00:11:31.387 "nvme_admin": false, 00:11:31.387 "nvme_io": false, 00:11:31.387 "nvme_io_md": false, 00:11:31.387 "write_zeroes": true, 00:11:31.387 "zcopy": true, 00:11:31.387 "get_zone_info": false, 00:11:31.387 "zone_management": false, 00:11:31.387 "zone_append": false, 00:11:31.387 "compare": false, 00:11:31.387 "compare_and_write": false, 00:11:31.387 "abort": true, 00:11:31.387 "seek_hole": false, 00:11:31.387 "seek_data": false, 00:11:31.387 "copy": true, 00:11:31.387 "nvme_iov_md": false 00:11:31.387 }, 00:11:31.387 "memory_domains": [ 00:11:31.387 { 00:11:31.387 "dma_device_id": "system", 00:11:31.387 "dma_device_type": 1 00:11:31.387 }, 00:11:31.387 { 00:11:31.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.387 "dma_device_type": 2 00:11:31.387 } 00:11:31.387 ], 00:11:31.387 "driver_specific": {} 00:11:31.387 } 00:11:31.387 ] 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.387 17:56:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.387 [2024-11-26 17:56:13.122495] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.387 [2024-11-26 17:56:13.122547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.387 [2024-11-26 17:56:13.122575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.387 [2024-11-26 17:56:13.124749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.387 [2024-11-26 17:56:13.124818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.387 "name": "Existed_Raid", 00:11:31.387 "uuid": "52a65c2e-b4ea-400a-979e-a083bae5c156", 00:11:31.387 "strip_size_kb": 64, 00:11:31.387 "state": "configuring", 00:11:31.387 "raid_level": "concat", 00:11:31.387 "superblock": true, 00:11:31.387 "num_base_bdevs": 4, 00:11:31.387 "num_base_bdevs_discovered": 3, 00:11:31.387 "num_base_bdevs_operational": 4, 00:11:31.387 "base_bdevs_list": [ 00:11:31.387 { 00:11:31.387 "name": "BaseBdev1", 00:11:31.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.387 "is_configured": false, 00:11:31.387 "data_offset": 0, 00:11:31.387 "data_size": 0 00:11:31.387 }, 00:11:31.387 { 00:11:31.387 "name": "BaseBdev2", 00:11:31.387 "uuid": "0cb30a6d-4b88-4248-9c65-4937063bf69f", 00:11:31.387 "is_configured": true, 00:11:31.387 "data_offset": 2048, 00:11:31.387 "data_size": 63488 
00:11:31.387 }, 00:11:31.387 { 00:11:31.387 "name": "BaseBdev3", 00:11:31.387 "uuid": "49e06e5d-7639-479a-b54c-1dc3117aeb6f", 00:11:31.387 "is_configured": true, 00:11:31.387 "data_offset": 2048, 00:11:31.387 "data_size": 63488 00:11:31.387 }, 00:11:31.387 { 00:11:31.387 "name": "BaseBdev4", 00:11:31.387 "uuid": "e19624e7-6643-49df-a53f-c50a2b9350f6", 00:11:31.387 "is_configured": true, 00:11:31.387 "data_offset": 2048, 00:11:31.387 "data_size": 63488 00:11:31.387 } 00:11:31.387 ] 00:11:31.387 }' 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.387 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.953 [2024-11-26 17:56:13.566216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.953 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.953 "name": "Existed_Raid", 00:11:31.954 "uuid": "52a65c2e-b4ea-400a-979e-a083bae5c156", 00:11:31.954 "strip_size_kb": 64, 00:11:31.954 "state": "configuring", 00:11:31.954 "raid_level": "concat", 00:11:31.954 "superblock": true, 00:11:31.954 "num_base_bdevs": 4, 00:11:31.954 "num_base_bdevs_discovered": 2, 00:11:31.954 "num_base_bdevs_operational": 4, 00:11:31.954 "base_bdevs_list": [ 00:11:31.954 { 00:11:31.954 "name": "BaseBdev1", 00:11:31.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.954 "is_configured": false, 00:11:31.954 "data_offset": 0, 00:11:31.954 "data_size": 0 00:11:31.954 }, 00:11:31.954 { 00:11:31.954 "name": null, 00:11:31.954 "uuid": "0cb30a6d-4b88-4248-9c65-4937063bf69f", 00:11:31.954 "is_configured": false, 00:11:31.954 "data_offset": 0, 00:11:31.954 "data_size": 63488 
00:11:31.954 }, 00:11:31.954 { 00:11:31.954 "name": "BaseBdev3", 00:11:31.954 "uuid": "49e06e5d-7639-479a-b54c-1dc3117aeb6f", 00:11:31.954 "is_configured": true, 00:11:31.954 "data_offset": 2048, 00:11:31.954 "data_size": 63488 00:11:31.954 }, 00:11:31.954 { 00:11:31.954 "name": "BaseBdev4", 00:11:31.954 "uuid": "e19624e7-6643-49df-a53f-c50a2b9350f6", 00:11:31.954 "is_configured": true, 00:11:31.954 "data_offset": 2048, 00:11:31.954 "data_size": 63488 00:11:31.954 } 00:11:31.954 ] 00:11:31.954 }' 00:11:31.954 17:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.954 17:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.212 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:32.212 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.212 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.212 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.519 [2024-11-26 17:56:14.134499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.519 BaseBdev1 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.519 [ 00:11:32.519 { 00:11:32.519 "name": "BaseBdev1", 00:11:32.519 "aliases": [ 00:11:32.519 "b2a4ac4a-9e42-4f3d-877a-9b51165f7b11" 00:11:32.519 ], 00:11:32.519 "product_name": "Malloc disk", 00:11:32.519 "block_size": 512, 00:11:32.519 "num_blocks": 65536, 00:11:32.519 "uuid": "b2a4ac4a-9e42-4f3d-877a-9b51165f7b11", 00:11:32.519 "assigned_rate_limits": { 00:11:32.519 "rw_ios_per_sec": 0, 00:11:32.519 "rw_mbytes_per_sec": 0, 
00:11:32.519 "r_mbytes_per_sec": 0, 00:11:32.519 "w_mbytes_per_sec": 0 00:11:32.519 }, 00:11:32.519 "claimed": true, 00:11:32.519 "claim_type": "exclusive_write", 00:11:32.519 "zoned": false, 00:11:32.519 "supported_io_types": { 00:11:32.519 "read": true, 00:11:32.519 "write": true, 00:11:32.519 "unmap": true, 00:11:32.519 "flush": true, 00:11:32.519 "reset": true, 00:11:32.519 "nvme_admin": false, 00:11:32.519 "nvme_io": false, 00:11:32.519 "nvme_io_md": false, 00:11:32.519 "write_zeroes": true, 00:11:32.519 "zcopy": true, 00:11:32.519 "get_zone_info": false, 00:11:32.519 "zone_management": false, 00:11:32.519 "zone_append": false, 00:11:32.519 "compare": false, 00:11:32.519 "compare_and_write": false, 00:11:32.519 "abort": true, 00:11:32.519 "seek_hole": false, 00:11:32.519 "seek_data": false, 00:11:32.519 "copy": true, 00:11:32.519 "nvme_iov_md": false 00:11:32.519 }, 00:11:32.519 "memory_domains": [ 00:11:32.519 { 00:11:32.519 "dma_device_id": "system", 00:11:32.519 "dma_device_type": 1 00:11:32.519 }, 00:11:32.519 { 00:11:32.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.519 "dma_device_type": 2 00:11:32.519 } 00:11:32.519 ], 00:11:32.519 "driver_specific": {} 00:11:32.519 } 00:11:32.519 ] 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.519 17:56:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.519 "name": "Existed_Raid", 00:11:32.519 "uuid": "52a65c2e-b4ea-400a-979e-a083bae5c156", 00:11:32.519 "strip_size_kb": 64, 00:11:32.519 "state": "configuring", 00:11:32.519 "raid_level": "concat", 00:11:32.519 "superblock": true, 00:11:32.519 "num_base_bdevs": 4, 00:11:32.519 "num_base_bdevs_discovered": 3, 00:11:32.519 "num_base_bdevs_operational": 4, 00:11:32.519 "base_bdevs_list": [ 00:11:32.519 { 00:11:32.519 "name": "BaseBdev1", 00:11:32.519 "uuid": "b2a4ac4a-9e42-4f3d-877a-9b51165f7b11", 00:11:32.519 "is_configured": true, 00:11:32.519 "data_offset": 2048, 00:11:32.519 "data_size": 63488 00:11:32.519 }, 00:11:32.519 { 
00:11:32.519 "name": null, 00:11:32.519 "uuid": "0cb30a6d-4b88-4248-9c65-4937063bf69f", 00:11:32.519 "is_configured": false, 00:11:32.519 "data_offset": 0, 00:11:32.519 "data_size": 63488 00:11:32.519 }, 00:11:32.519 { 00:11:32.519 "name": "BaseBdev3", 00:11:32.519 "uuid": "49e06e5d-7639-479a-b54c-1dc3117aeb6f", 00:11:32.519 "is_configured": true, 00:11:32.519 "data_offset": 2048, 00:11:32.519 "data_size": 63488 00:11:32.519 }, 00:11:32.519 { 00:11:32.519 "name": "BaseBdev4", 00:11:32.519 "uuid": "e19624e7-6643-49df-a53f-c50a2b9350f6", 00:11:32.519 "is_configured": true, 00:11:32.519 "data_offset": 2048, 00:11:32.519 "data_size": 63488 00:11:32.519 } 00:11:32.519 ] 00:11:32.519 }' 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.519 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.806 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:32.806 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.806 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.806 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.806 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.064 [2024-11-26 17:56:14.689693] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.064 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.064 17:56:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.064 "name": "Existed_Raid", 00:11:33.064 "uuid": "52a65c2e-b4ea-400a-979e-a083bae5c156", 00:11:33.064 "strip_size_kb": 64, 00:11:33.064 "state": "configuring", 00:11:33.064 "raid_level": "concat", 00:11:33.064 "superblock": true, 00:11:33.064 "num_base_bdevs": 4, 00:11:33.064 "num_base_bdevs_discovered": 2, 00:11:33.064 "num_base_bdevs_operational": 4, 00:11:33.064 "base_bdevs_list": [ 00:11:33.064 { 00:11:33.064 "name": "BaseBdev1", 00:11:33.064 "uuid": "b2a4ac4a-9e42-4f3d-877a-9b51165f7b11", 00:11:33.064 "is_configured": true, 00:11:33.064 "data_offset": 2048, 00:11:33.064 "data_size": 63488 00:11:33.064 }, 00:11:33.064 { 00:11:33.064 "name": null, 00:11:33.064 "uuid": "0cb30a6d-4b88-4248-9c65-4937063bf69f", 00:11:33.065 "is_configured": false, 00:11:33.065 "data_offset": 0, 00:11:33.065 "data_size": 63488 00:11:33.065 }, 00:11:33.065 { 00:11:33.065 "name": null, 00:11:33.065 "uuid": "49e06e5d-7639-479a-b54c-1dc3117aeb6f", 00:11:33.065 "is_configured": false, 00:11:33.065 "data_offset": 0, 00:11:33.065 "data_size": 63488 00:11:33.065 }, 00:11:33.065 { 00:11:33.065 "name": "BaseBdev4", 00:11:33.065 "uuid": "e19624e7-6643-49df-a53f-c50a2b9350f6", 00:11:33.065 "is_configured": true, 00:11:33.065 "data_offset": 2048, 00:11:33.065 "data_size": 63488 00:11:33.065 } 00:11:33.065 ] 00:11:33.065 }' 00:11:33.065 17:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.065 17:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.322 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.322 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:33.322 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.322 
17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.580 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.580 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:33.580 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:33.580 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.580 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.580 [2024-11-26 17:56:15.217236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.580 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.580 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:33.580 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.580 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.580 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.580 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.580 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.580 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.580 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.580 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:33.580 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.580 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.580 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.581 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.581 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.581 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.581 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.581 "name": "Existed_Raid", 00:11:33.581 "uuid": "52a65c2e-b4ea-400a-979e-a083bae5c156", 00:11:33.581 "strip_size_kb": 64, 00:11:33.581 "state": "configuring", 00:11:33.581 "raid_level": "concat", 00:11:33.581 "superblock": true, 00:11:33.581 "num_base_bdevs": 4, 00:11:33.581 "num_base_bdevs_discovered": 3, 00:11:33.581 "num_base_bdevs_operational": 4, 00:11:33.581 "base_bdevs_list": [ 00:11:33.581 { 00:11:33.581 "name": "BaseBdev1", 00:11:33.581 "uuid": "b2a4ac4a-9e42-4f3d-877a-9b51165f7b11", 00:11:33.581 "is_configured": true, 00:11:33.581 "data_offset": 2048, 00:11:33.581 "data_size": 63488 00:11:33.581 }, 00:11:33.581 { 00:11:33.581 "name": null, 00:11:33.581 "uuid": "0cb30a6d-4b88-4248-9c65-4937063bf69f", 00:11:33.581 "is_configured": false, 00:11:33.581 "data_offset": 0, 00:11:33.581 "data_size": 63488 00:11:33.581 }, 00:11:33.581 { 00:11:33.581 "name": "BaseBdev3", 00:11:33.581 "uuid": "49e06e5d-7639-479a-b54c-1dc3117aeb6f", 00:11:33.581 "is_configured": true, 00:11:33.581 "data_offset": 2048, 00:11:33.581 "data_size": 63488 00:11:33.581 }, 00:11:33.581 { 00:11:33.581 "name": "BaseBdev4", 00:11:33.581 "uuid": 
"e19624e7-6643-49df-a53f-c50a2b9350f6", 00:11:33.581 "is_configured": true, 00:11:33.581 "data_offset": 2048, 00:11:33.581 "data_size": 63488 00:11:33.581 } 00:11:33.581 ] 00:11:33.581 }' 00:11:33.581 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.581 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.837 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.837 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.837 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.837 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:33.837 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.096 [2024-11-26 17:56:15.721284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.096 "name": "Existed_Raid", 00:11:34.096 "uuid": "52a65c2e-b4ea-400a-979e-a083bae5c156", 00:11:34.096 "strip_size_kb": 64, 00:11:34.096 "state": "configuring", 00:11:34.096 "raid_level": "concat", 00:11:34.096 "superblock": true, 00:11:34.096 "num_base_bdevs": 4, 00:11:34.096 "num_base_bdevs_discovered": 2, 00:11:34.096 "num_base_bdevs_operational": 4, 00:11:34.096 "base_bdevs_list": [ 00:11:34.096 { 00:11:34.096 "name": null, 00:11:34.096 
"uuid": "b2a4ac4a-9e42-4f3d-877a-9b51165f7b11", 00:11:34.096 "is_configured": false, 00:11:34.096 "data_offset": 0, 00:11:34.096 "data_size": 63488 00:11:34.096 }, 00:11:34.096 { 00:11:34.096 "name": null, 00:11:34.096 "uuid": "0cb30a6d-4b88-4248-9c65-4937063bf69f", 00:11:34.096 "is_configured": false, 00:11:34.096 "data_offset": 0, 00:11:34.096 "data_size": 63488 00:11:34.096 }, 00:11:34.096 { 00:11:34.096 "name": "BaseBdev3", 00:11:34.096 "uuid": "49e06e5d-7639-479a-b54c-1dc3117aeb6f", 00:11:34.096 "is_configured": true, 00:11:34.096 "data_offset": 2048, 00:11:34.096 "data_size": 63488 00:11:34.096 }, 00:11:34.096 { 00:11:34.096 "name": "BaseBdev4", 00:11:34.096 "uuid": "e19624e7-6643-49df-a53f-c50a2b9350f6", 00:11:34.096 "is_configured": true, 00:11:34.096 "data_offset": 2048, 00:11:34.096 "data_size": 63488 00:11:34.096 } 00:11:34.096 ] 00:11:34.096 }' 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.096 17:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.661 [2024-11-26 17:56:16.369261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.661 17:56:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.661 "name": "Existed_Raid", 00:11:34.661 "uuid": "52a65c2e-b4ea-400a-979e-a083bae5c156", 00:11:34.661 "strip_size_kb": 64, 00:11:34.661 "state": "configuring", 00:11:34.661 "raid_level": "concat", 00:11:34.661 "superblock": true, 00:11:34.661 "num_base_bdevs": 4, 00:11:34.661 "num_base_bdevs_discovered": 3, 00:11:34.661 "num_base_bdevs_operational": 4, 00:11:34.661 "base_bdevs_list": [ 00:11:34.661 { 00:11:34.661 "name": null, 00:11:34.661 "uuid": "b2a4ac4a-9e42-4f3d-877a-9b51165f7b11", 00:11:34.661 "is_configured": false, 00:11:34.661 "data_offset": 0, 00:11:34.661 "data_size": 63488 00:11:34.661 }, 00:11:34.661 { 00:11:34.661 "name": "BaseBdev2", 00:11:34.661 "uuid": "0cb30a6d-4b88-4248-9c65-4937063bf69f", 00:11:34.661 "is_configured": true, 00:11:34.661 "data_offset": 2048, 00:11:34.661 "data_size": 63488 00:11:34.661 }, 00:11:34.661 { 00:11:34.661 "name": "BaseBdev3", 00:11:34.661 "uuid": "49e06e5d-7639-479a-b54c-1dc3117aeb6f", 00:11:34.661 "is_configured": true, 00:11:34.661 "data_offset": 2048, 00:11:34.661 "data_size": 63488 00:11:34.661 }, 00:11:34.661 { 00:11:34.661 "name": "BaseBdev4", 00:11:34.661 "uuid": "e19624e7-6643-49df-a53f-c50a2b9350f6", 00:11:34.661 "is_configured": true, 00:11:34.661 "data_offset": 2048, 00:11:34.661 "data_size": 63488 00:11:34.661 } 00:11:34.661 ] 00:11:34.661 }' 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.661 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.226 17:56:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b2a4ac4a-9e42-4f3d-877a-9b51165f7b11 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.226 [2024-11-26 17:56:16.981119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:35.226 [2024-11-26 17:56:16.981438] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:35.226 [2024-11-26 17:56:16.981458] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:35.226 [2024-11-26 17:56:16.981771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:35.226 [2024-11-26 17:56:16.981939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:35.226 [2024-11-26 17:56:16.981958] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:35.226 [2024-11-26 17:56:16.982136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.226 NewBaseBdev 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:35.226 17:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.226 17:56:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.226 [ 00:11:35.226 { 00:11:35.226 "name": "NewBaseBdev", 00:11:35.226 "aliases": [ 00:11:35.226 "b2a4ac4a-9e42-4f3d-877a-9b51165f7b11" 00:11:35.226 ], 00:11:35.226 "product_name": "Malloc disk", 00:11:35.226 "block_size": 512, 00:11:35.226 "num_blocks": 65536, 00:11:35.226 "uuid": "b2a4ac4a-9e42-4f3d-877a-9b51165f7b11", 00:11:35.226 "assigned_rate_limits": { 00:11:35.226 "rw_ios_per_sec": 0, 00:11:35.226 "rw_mbytes_per_sec": 0, 00:11:35.226 "r_mbytes_per_sec": 0, 00:11:35.226 "w_mbytes_per_sec": 0 00:11:35.226 }, 00:11:35.226 "claimed": true, 00:11:35.226 "claim_type": "exclusive_write", 00:11:35.226 "zoned": false, 00:11:35.226 "supported_io_types": { 00:11:35.226 "read": true, 00:11:35.226 "write": true, 00:11:35.226 "unmap": true, 00:11:35.226 "flush": true, 00:11:35.226 "reset": true, 00:11:35.226 "nvme_admin": false, 00:11:35.226 "nvme_io": false, 00:11:35.226 "nvme_io_md": false, 00:11:35.226 "write_zeroes": true, 00:11:35.226 "zcopy": true, 00:11:35.226 "get_zone_info": false, 00:11:35.226 "zone_management": false, 00:11:35.226 "zone_append": false, 00:11:35.226 "compare": false, 00:11:35.226 "compare_and_write": false, 00:11:35.226 "abort": true, 00:11:35.226 "seek_hole": false, 00:11:35.226 "seek_data": false, 00:11:35.226 "copy": true, 00:11:35.226 "nvme_iov_md": false 00:11:35.226 }, 00:11:35.226 "memory_domains": [ 00:11:35.226 { 00:11:35.226 "dma_device_id": "system", 00:11:35.226 "dma_device_type": 1 00:11:35.226 }, 00:11:35.226 { 00:11:35.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.226 "dma_device_type": 2 00:11:35.226 } 00:11:35.226 ], 00:11:35.226 "driver_specific": {} 00:11:35.226 } 00:11:35.226 ] 00:11:35.226 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.226 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.226 17:56:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:35.226 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.226 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.227 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.227 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.227 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.227 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.227 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.227 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.227 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.227 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.227 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.227 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.227 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.227 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.227 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.227 "name": "Existed_Raid", 00:11:35.227 "uuid": "52a65c2e-b4ea-400a-979e-a083bae5c156", 00:11:35.227 "strip_size_kb": 64, 00:11:35.227 
"state": "online", 00:11:35.227 "raid_level": "concat", 00:11:35.227 "superblock": true, 00:11:35.227 "num_base_bdevs": 4, 00:11:35.227 "num_base_bdevs_discovered": 4, 00:11:35.227 "num_base_bdevs_operational": 4, 00:11:35.227 "base_bdevs_list": [ 00:11:35.227 { 00:11:35.227 "name": "NewBaseBdev", 00:11:35.227 "uuid": "b2a4ac4a-9e42-4f3d-877a-9b51165f7b11", 00:11:35.227 "is_configured": true, 00:11:35.227 "data_offset": 2048, 00:11:35.227 "data_size": 63488 00:11:35.227 }, 00:11:35.227 { 00:11:35.227 "name": "BaseBdev2", 00:11:35.227 "uuid": "0cb30a6d-4b88-4248-9c65-4937063bf69f", 00:11:35.227 "is_configured": true, 00:11:35.227 "data_offset": 2048, 00:11:35.227 "data_size": 63488 00:11:35.227 }, 00:11:35.227 { 00:11:35.227 "name": "BaseBdev3", 00:11:35.227 "uuid": "49e06e5d-7639-479a-b54c-1dc3117aeb6f", 00:11:35.227 "is_configured": true, 00:11:35.227 "data_offset": 2048, 00:11:35.227 "data_size": 63488 00:11:35.227 }, 00:11:35.227 { 00:11:35.227 "name": "BaseBdev4", 00:11:35.227 "uuid": "e19624e7-6643-49df-a53f-c50a2b9350f6", 00:11:35.227 "is_configured": true, 00:11:35.227 "data_offset": 2048, 00:11:35.227 "data_size": 63488 00:11:35.227 } 00:11:35.227 ] 00:11:35.227 }' 00:11:35.227 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.227 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.792 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:35.792 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:35.792 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:35.792 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:35.792 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:35.792 
17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:35.792 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:35.792 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.792 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:35.792 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.792 [2024-11-26 17:56:17.452727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.792 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.792 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:35.792 "name": "Existed_Raid", 00:11:35.792 "aliases": [ 00:11:35.792 "52a65c2e-b4ea-400a-979e-a083bae5c156" 00:11:35.792 ], 00:11:35.792 "product_name": "Raid Volume", 00:11:35.792 "block_size": 512, 00:11:35.792 "num_blocks": 253952, 00:11:35.792 "uuid": "52a65c2e-b4ea-400a-979e-a083bae5c156", 00:11:35.792 "assigned_rate_limits": { 00:11:35.792 "rw_ios_per_sec": 0, 00:11:35.792 "rw_mbytes_per_sec": 0, 00:11:35.792 "r_mbytes_per_sec": 0, 00:11:35.792 "w_mbytes_per_sec": 0 00:11:35.792 }, 00:11:35.792 "claimed": false, 00:11:35.792 "zoned": false, 00:11:35.792 "supported_io_types": { 00:11:35.793 "read": true, 00:11:35.793 "write": true, 00:11:35.793 "unmap": true, 00:11:35.793 "flush": true, 00:11:35.793 "reset": true, 00:11:35.793 "nvme_admin": false, 00:11:35.793 "nvme_io": false, 00:11:35.793 "nvme_io_md": false, 00:11:35.793 "write_zeroes": true, 00:11:35.793 "zcopy": false, 00:11:35.793 "get_zone_info": false, 00:11:35.793 "zone_management": false, 00:11:35.793 "zone_append": false, 00:11:35.793 "compare": false, 00:11:35.793 "compare_and_write": false, 00:11:35.793 "abort": 
false, 00:11:35.793 "seek_hole": false, 00:11:35.793 "seek_data": false, 00:11:35.793 "copy": false, 00:11:35.793 "nvme_iov_md": false 00:11:35.793 }, 00:11:35.793 "memory_domains": [ 00:11:35.793 { 00:11:35.793 "dma_device_id": "system", 00:11:35.793 "dma_device_type": 1 00:11:35.793 }, 00:11:35.793 { 00:11:35.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.793 "dma_device_type": 2 00:11:35.793 }, 00:11:35.793 { 00:11:35.793 "dma_device_id": "system", 00:11:35.793 "dma_device_type": 1 00:11:35.793 }, 00:11:35.793 { 00:11:35.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.793 "dma_device_type": 2 00:11:35.793 }, 00:11:35.793 { 00:11:35.793 "dma_device_id": "system", 00:11:35.793 "dma_device_type": 1 00:11:35.793 }, 00:11:35.793 { 00:11:35.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.793 "dma_device_type": 2 00:11:35.793 }, 00:11:35.793 { 00:11:35.793 "dma_device_id": "system", 00:11:35.793 "dma_device_type": 1 00:11:35.793 }, 00:11:35.793 { 00:11:35.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.793 "dma_device_type": 2 00:11:35.793 } 00:11:35.793 ], 00:11:35.793 "driver_specific": { 00:11:35.793 "raid": { 00:11:35.793 "uuid": "52a65c2e-b4ea-400a-979e-a083bae5c156", 00:11:35.793 "strip_size_kb": 64, 00:11:35.793 "state": "online", 00:11:35.793 "raid_level": "concat", 00:11:35.793 "superblock": true, 00:11:35.793 "num_base_bdevs": 4, 00:11:35.793 "num_base_bdevs_discovered": 4, 00:11:35.793 "num_base_bdevs_operational": 4, 00:11:35.793 "base_bdevs_list": [ 00:11:35.793 { 00:11:35.793 "name": "NewBaseBdev", 00:11:35.793 "uuid": "b2a4ac4a-9e42-4f3d-877a-9b51165f7b11", 00:11:35.793 "is_configured": true, 00:11:35.793 "data_offset": 2048, 00:11:35.793 "data_size": 63488 00:11:35.793 }, 00:11:35.793 { 00:11:35.793 "name": "BaseBdev2", 00:11:35.793 "uuid": "0cb30a6d-4b88-4248-9c65-4937063bf69f", 00:11:35.793 "is_configured": true, 00:11:35.793 "data_offset": 2048, 00:11:35.793 "data_size": 63488 00:11:35.793 }, 00:11:35.793 { 00:11:35.793 
"name": "BaseBdev3", 00:11:35.793 "uuid": "49e06e5d-7639-479a-b54c-1dc3117aeb6f", 00:11:35.793 "is_configured": true, 00:11:35.793 "data_offset": 2048, 00:11:35.793 "data_size": 63488 00:11:35.793 }, 00:11:35.793 { 00:11:35.793 "name": "BaseBdev4", 00:11:35.793 "uuid": "e19624e7-6643-49df-a53f-c50a2b9350f6", 00:11:35.793 "is_configured": true, 00:11:35.793 "data_offset": 2048, 00:11:35.793 "data_size": 63488 00:11:35.793 } 00:11:35.793 ] 00:11:35.793 } 00:11:35.793 } 00:11:35.793 }' 00:11:35.793 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:35.793 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:35.793 BaseBdev2 00:11:35.793 BaseBdev3 00:11:35.793 BaseBdev4' 00:11:35.793 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.793 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:35.793 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.793 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:35.793 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.793 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.793 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.793 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.793 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.793 17:56:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.793 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.793 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.793 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:35.793 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.793 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.051 [2024-11-26 17:56:17.759854] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.051 [2024-11-26 17:56:17.759901] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.051 [2024-11-26 17:56:17.760008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.051 [2024-11-26 17:56:17.760106] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.051 [2024-11-26 17:56:17.760123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72234 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72234 ']' 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72234 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72234 00:11:36.051 killing process with pid 72234 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72234' 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72234 00:11:36.051 [2024-11-26 17:56:17.807955] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:36.051 17:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72234 00:11:36.616 [2024-11-26 17:56:18.294308] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:38.011 17:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:38.011 00:11:38.011 real 0m12.410s 00:11:38.011 user 0m19.492s 00:11:38.011 sys 0m2.102s 00:11:38.011 17:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.011 17:56:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.011 ************************************ 00:11:38.011 END TEST raid_state_function_test_sb 00:11:38.011 ************************************ 00:11:38.011 17:56:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:38.011 17:56:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:38.011 17:56:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.011 17:56:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.011 ************************************ 00:11:38.011 START TEST raid_superblock_test 00:11:38.011 ************************************ 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72911 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72911 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72911 ']' 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.011 17:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.011 [2024-11-26 17:56:19.819895] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:11:38.011 [2024-11-26 17:56:19.820063] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72911 ] 00:11:38.269 [2024-11-26 17:56:19.996788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.527 [2024-11-26 17:56:20.134285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.527 [2024-11-26 17:56:20.383301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.527 [2024-11-26 17:56:20.383355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:39.094 
17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.094 malloc1 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.094 [2024-11-26 17:56:20.799147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:39.094 [2024-11-26 17:56:20.799218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.094 [2024-11-26 17:56:20.799244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:39.094 [2024-11-26 17:56:20.799256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.094 [2024-11-26 17:56:20.801786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.094 [2024-11-26 17:56:20.801830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:39.094 pt1 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.094 malloc2 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.094 [2024-11-26 17:56:20.861966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:39.094 [2024-11-26 17:56:20.862052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.094 [2024-11-26 17:56:20.862085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:39.094 [2024-11-26 17:56:20.862097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.094 [2024-11-26 17:56:20.864611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.094 [2024-11-26 17:56:20.864655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:39.094 
pt2 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.094 malloc3 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.094 [2024-11-26 17:56:20.944878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:39.094 [2024-11-26 17:56:20.944944] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.094 [2024-11-26 17:56:20.944971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:39.094 [2024-11-26 17:56:20.944982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.094 [2024-11-26 17:56:20.947502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.094 [2024-11-26 17:56:20.947544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:39.094 pt3 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.094 17:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.353 malloc4 00:11:39.353 17:56:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.353 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:39.353 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.353 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.353 [2024-11-26 17:56:21.007790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:39.353 [2024-11-26 17:56:21.007861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.354 [2024-11-26 17:56:21.007886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:39.354 [2024-11-26 17:56:21.007897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.354 [2024-11-26 17:56:21.010387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.354 [2024-11-26 17:56:21.010430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:39.354 pt4 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.354 [2024-11-26 17:56:21.019797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:39.354 [2024-11-26 
17:56:21.021919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:39.354 [2024-11-26 17:56:21.022044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:39.354 [2024-11-26 17:56:21.022123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:39.354 [2024-11-26 17:56:21.022367] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:39.354 [2024-11-26 17:56:21.022390] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:39.354 [2024-11-26 17:56:21.022703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:39.354 [2024-11-26 17:56:21.022908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:39.354 [2024-11-26 17:56:21.022932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:39.354 [2024-11-26 17:56:21.023139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.354 "name": "raid_bdev1", 00:11:39.354 "uuid": "3ff5cc26-a592-4465-bd75-ce326ac1ef11", 00:11:39.354 "strip_size_kb": 64, 00:11:39.354 "state": "online", 00:11:39.354 "raid_level": "concat", 00:11:39.354 "superblock": true, 00:11:39.354 "num_base_bdevs": 4, 00:11:39.354 "num_base_bdevs_discovered": 4, 00:11:39.354 "num_base_bdevs_operational": 4, 00:11:39.354 "base_bdevs_list": [ 00:11:39.354 { 00:11:39.354 "name": "pt1", 00:11:39.354 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.354 "is_configured": true, 00:11:39.354 "data_offset": 2048, 00:11:39.354 "data_size": 63488 00:11:39.354 }, 00:11:39.354 { 00:11:39.354 "name": "pt2", 00:11:39.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.354 "is_configured": true, 00:11:39.354 "data_offset": 2048, 00:11:39.354 "data_size": 63488 00:11:39.354 }, 00:11:39.354 { 00:11:39.354 "name": "pt3", 00:11:39.354 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.354 "is_configured": true, 00:11:39.354 "data_offset": 2048, 00:11:39.354 
"data_size": 63488 00:11:39.354 }, 00:11:39.354 { 00:11:39.354 "name": "pt4", 00:11:39.354 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:39.354 "is_configured": true, 00:11:39.354 "data_offset": 2048, 00:11:39.354 "data_size": 63488 00:11:39.354 } 00:11:39.354 ] 00:11:39.354 }' 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.354 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.923 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:39.923 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:39.923 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:39.923 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:39.923 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:39.923 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:39.923 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:39.923 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:39.923 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.923 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.923 [2024-11-26 17:56:21.555334] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.923 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.923 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:39.923 "name": "raid_bdev1", 00:11:39.923 "aliases": [ 00:11:39.923 "3ff5cc26-a592-4465-bd75-ce326ac1ef11" 
00:11:39.923 ], 00:11:39.923 "product_name": "Raid Volume", 00:11:39.923 "block_size": 512, 00:11:39.923 "num_blocks": 253952, 00:11:39.923 "uuid": "3ff5cc26-a592-4465-bd75-ce326ac1ef11", 00:11:39.923 "assigned_rate_limits": { 00:11:39.923 "rw_ios_per_sec": 0, 00:11:39.923 "rw_mbytes_per_sec": 0, 00:11:39.923 "r_mbytes_per_sec": 0, 00:11:39.923 "w_mbytes_per_sec": 0 00:11:39.923 }, 00:11:39.923 "claimed": false, 00:11:39.923 "zoned": false, 00:11:39.923 "supported_io_types": { 00:11:39.923 "read": true, 00:11:39.923 "write": true, 00:11:39.923 "unmap": true, 00:11:39.923 "flush": true, 00:11:39.923 "reset": true, 00:11:39.923 "nvme_admin": false, 00:11:39.923 "nvme_io": false, 00:11:39.923 "nvme_io_md": false, 00:11:39.923 "write_zeroes": true, 00:11:39.923 "zcopy": false, 00:11:39.923 "get_zone_info": false, 00:11:39.923 "zone_management": false, 00:11:39.923 "zone_append": false, 00:11:39.923 "compare": false, 00:11:39.923 "compare_and_write": false, 00:11:39.923 "abort": false, 00:11:39.923 "seek_hole": false, 00:11:39.923 "seek_data": false, 00:11:39.923 "copy": false, 00:11:39.923 "nvme_iov_md": false 00:11:39.923 }, 00:11:39.923 "memory_domains": [ 00:11:39.923 { 00:11:39.923 "dma_device_id": "system", 00:11:39.923 "dma_device_type": 1 00:11:39.923 }, 00:11:39.923 { 00:11:39.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.923 "dma_device_type": 2 00:11:39.923 }, 00:11:39.923 { 00:11:39.923 "dma_device_id": "system", 00:11:39.923 "dma_device_type": 1 00:11:39.923 }, 00:11:39.923 { 00:11:39.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.923 "dma_device_type": 2 00:11:39.923 }, 00:11:39.923 { 00:11:39.923 "dma_device_id": "system", 00:11:39.923 "dma_device_type": 1 00:11:39.923 }, 00:11:39.923 { 00:11:39.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.923 "dma_device_type": 2 00:11:39.923 }, 00:11:39.923 { 00:11:39.923 "dma_device_id": "system", 00:11:39.923 "dma_device_type": 1 00:11:39.923 }, 00:11:39.923 { 00:11:39.924 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:39.924 "dma_device_type": 2 00:11:39.924 } 00:11:39.924 ], 00:11:39.924 "driver_specific": { 00:11:39.924 "raid": { 00:11:39.924 "uuid": "3ff5cc26-a592-4465-bd75-ce326ac1ef11", 00:11:39.924 "strip_size_kb": 64, 00:11:39.924 "state": "online", 00:11:39.924 "raid_level": "concat", 00:11:39.924 "superblock": true, 00:11:39.924 "num_base_bdevs": 4, 00:11:39.924 "num_base_bdevs_discovered": 4, 00:11:39.924 "num_base_bdevs_operational": 4, 00:11:39.924 "base_bdevs_list": [ 00:11:39.924 { 00:11:39.924 "name": "pt1", 00:11:39.924 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.924 "is_configured": true, 00:11:39.924 "data_offset": 2048, 00:11:39.924 "data_size": 63488 00:11:39.924 }, 00:11:39.924 { 00:11:39.924 "name": "pt2", 00:11:39.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.924 "is_configured": true, 00:11:39.924 "data_offset": 2048, 00:11:39.924 "data_size": 63488 00:11:39.924 }, 00:11:39.924 { 00:11:39.924 "name": "pt3", 00:11:39.924 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.924 "is_configured": true, 00:11:39.924 "data_offset": 2048, 00:11:39.924 "data_size": 63488 00:11:39.924 }, 00:11:39.924 { 00:11:39.924 "name": "pt4", 00:11:39.924 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:39.924 "is_configured": true, 00:11:39.924 "data_offset": 2048, 00:11:39.924 "data_size": 63488 00:11:39.924 } 00:11:39.924 ] 00:11:39.924 } 00:11:39.924 } 00:11:39.924 }' 00:11:39.924 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:39.924 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:39.924 pt2 00:11:39.924 pt3 00:11:39.924 pt4' 00:11:39.924 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.924 17:56:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:39.924 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.924 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:39.924 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.924 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.924 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.924 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.924 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.924 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.924 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.924 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:39.924 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.924 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.924 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.924 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.184 17:56:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.184 [2024-11-26 17:56:21.890695] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3ff5cc26-a592-4465-bd75-ce326ac1ef11 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3ff5cc26-a592-4465-bd75-ce326ac1ef11 ']' 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.184 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.185 [2024-11-26 17:56:21.934272] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:40.185 [2024-11-26 17:56:21.934308] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:40.185 [2024-11-26 17:56:21.934412] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.185 [2024-11-26 17:56:21.934495] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.185 [2024-11-26 17:56:21.934512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:40.185 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.185 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.185 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:40.185 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:40.185 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.185 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.185 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:40.185 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:40.185 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:40.185 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:40.185 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.185 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.185 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.185 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:40.185 17:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:40.185 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.185 17:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.185 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.185 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:40.185 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:40.185 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.185 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.185 17:56:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.185 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:40.185 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:40.185 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.185 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.185 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.185 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:40.185 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.185 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:40.185 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.444 17:56:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.444 [2024-11-26 17:56:22.098072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:40.444 [2024-11-26 17:56:22.100274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:40.444 [2024-11-26 17:56:22.100393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:40.444 [2024-11-26 17:56:22.100466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:40.444 [2024-11-26 17:56:22.100563] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:40.444 [2024-11-26 17:56:22.100685] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:40.444 [2024-11-26 17:56:22.100762] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:40.444 [2024-11-26 17:56:22.100794] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:40.444 [2024-11-26 17:56:22.100812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:40.444 [2024-11-26 17:56:22.100828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:40.444 request: 00:11:40.444 { 00:11:40.444 "name": "raid_bdev1", 00:11:40.444 "raid_level": "concat", 00:11:40.444 "base_bdevs": [ 00:11:40.444 "malloc1", 00:11:40.444 "malloc2", 00:11:40.444 "malloc3", 00:11:40.444 "malloc4" 00:11:40.444 ], 00:11:40.444 "strip_size_kb": 64, 00:11:40.444 "superblock": false, 00:11:40.444 "method": "bdev_raid_create", 00:11:40.444 "req_id": 1 00:11:40.444 } 00:11:40.444 Got JSON-RPC error response 00:11:40.444 response: 00:11:40.444 { 00:11:40.444 "code": -17, 00:11:40.444 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:40.444 } 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.444 [2024-11-26 17:56:22.157901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:40.444 [2024-11-26 17:56:22.158027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.444 [2024-11-26 17:56:22.158073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:40.444 [2024-11-26 17:56:22.158152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.444 [2024-11-26 17:56:22.160686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.444 [2024-11-26 17:56:22.160773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:40.444 [2024-11-26 17:56:22.160897] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:40.444 [2024-11-26 17:56:22.161006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:40.444 pt1 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.444 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.445 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.445 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.445 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.445 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.445 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.445 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.445 "name": "raid_bdev1", 00:11:40.445 "uuid": "3ff5cc26-a592-4465-bd75-ce326ac1ef11", 00:11:40.445 "strip_size_kb": 64, 00:11:40.445 "state": "configuring", 00:11:40.445 "raid_level": "concat", 00:11:40.445 "superblock": true, 00:11:40.445 "num_base_bdevs": 4, 00:11:40.445 "num_base_bdevs_discovered": 1, 00:11:40.445 "num_base_bdevs_operational": 4, 00:11:40.445 "base_bdevs_list": [ 00:11:40.445 { 00:11:40.445 "name": "pt1", 00:11:40.445 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.445 "is_configured": true, 00:11:40.445 "data_offset": 2048, 00:11:40.445 "data_size": 63488 00:11:40.445 }, 00:11:40.445 { 00:11:40.445 "name": null, 00:11:40.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.445 "is_configured": false, 00:11:40.445 "data_offset": 2048, 00:11:40.445 "data_size": 63488 00:11:40.445 }, 00:11:40.445 { 00:11:40.445 "name": null, 00:11:40.445 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.445 "is_configured": false, 00:11:40.445 "data_offset": 2048, 00:11:40.445 "data_size": 63488 00:11:40.445 }, 00:11:40.445 { 00:11:40.445 "name": null, 00:11:40.445 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:40.445 "is_configured": false, 00:11:40.445 "data_offset": 2048, 00:11:40.445 "data_size": 63488 00:11:40.445 } 00:11:40.445 ] 00:11:40.445 }' 00:11:40.445 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.445 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.012 [2024-11-26 17:56:22.645248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:41.012 [2024-11-26 17:56:22.645341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.012 [2024-11-26 17:56:22.645366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:41.012 [2024-11-26 17:56:22.645379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.012 [2024-11-26 17:56:22.645880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.012 [2024-11-26 17:56:22.645919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:41.012 [2024-11-26 17:56:22.646031] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:41.012 [2024-11-26 17:56:22.646061] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:41.012 pt2 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.012 [2024-11-26 17:56:22.657249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.012 17:56:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.012 "name": "raid_bdev1", 00:11:41.012 "uuid": "3ff5cc26-a592-4465-bd75-ce326ac1ef11", 00:11:41.012 "strip_size_kb": 64, 00:11:41.012 "state": "configuring", 00:11:41.012 "raid_level": "concat", 00:11:41.012 "superblock": true, 00:11:41.012 "num_base_bdevs": 4, 00:11:41.012 "num_base_bdevs_discovered": 1, 00:11:41.012 "num_base_bdevs_operational": 4, 00:11:41.012 "base_bdevs_list": [ 00:11:41.012 { 00:11:41.012 "name": "pt1", 00:11:41.012 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:41.012 "is_configured": true, 00:11:41.012 "data_offset": 2048, 00:11:41.012 "data_size": 63488 00:11:41.012 }, 00:11:41.012 { 00:11:41.012 "name": null, 00:11:41.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:41.012 "is_configured": false, 00:11:41.012 "data_offset": 0, 00:11:41.012 "data_size": 63488 00:11:41.012 }, 00:11:41.012 { 00:11:41.012 "name": null, 00:11:41.012 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:41.012 "is_configured": false, 00:11:41.012 "data_offset": 2048, 00:11:41.012 "data_size": 63488 00:11:41.012 }, 00:11:41.012 { 00:11:41.012 "name": null, 00:11:41.012 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:41.012 "is_configured": false, 00:11:41.012 "data_offset": 2048, 00:11:41.012 "data_size": 63488 00:11:41.012 } 00:11:41.012 ] 00:11:41.012 }' 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.012 17:56:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:41.272 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:41.272 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:41.272 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:41.272 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.272 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.272 [2024-11-26 17:56:23.096739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:41.272 [2024-11-26 17:56:23.096867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.272 [2024-11-26 17:56:23.096922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:41.272 [2024-11-26 17:56:23.096958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.272 [2024-11-26 17:56:23.097546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.272 [2024-11-26 17:56:23.097626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:41.272 [2024-11-26 17:56:23.097758] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:41.272 [2024-11-26 17:56:23.097819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:41.272 pt2 00:11:41.272 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.272 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:41.272 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:41.272 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:41.272 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.272 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.272 [2024-11-26 17:56:23.108675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:41.272 [2024-11-26 17:56:23.108772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.272 [2024-11-26 17:56:23.108822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:41.272 [2024-11-26 17:56:23.108862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.272 [2024-11-26 17:56:23.109360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.272 [2024-11-26 17:56:23.109429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:41.272 [2024-11-26 17:56:23.109536] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:41.273 [2024-11-26 17:56:23.109602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:41.273 pt3 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.273 [2024-11-26 17:56:23.120634] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:41.273 [2024-11-26 17:56:23.120686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.273 [2024-11-26 17:56:23.120707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:41.273 [2024-11-26 17:56:23.120716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.273 [2024-11-26 17:56:23.121187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.273 [2024-11-26 17:56:23.121207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:41.273 [2024-11-26 17:56:23.121282] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:41.273 [2024-11-26 17:56:23.121307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:41.273 [2024-11-26 17:56:23.121456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:41.273 [2024-11-26 17:56:23.121473] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:41.273 [2024-11-26 17:56:23.121748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:41.273 [2024-11-26 17:56:23.121911] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:41.273 [2024-11-26 17:56:23.121925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:41.273 [2024-11-26 17:56:23.122092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.273 pt4 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.273 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.531 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.531 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.531 "name": "raid_bdev1", 00:11:41.531 "uuid": "3ff5cc26-a592-4465-bd75-ce326ac1ef11", 00:11:41.531 "strip_size_kb": 64, 00:11:41.531 "state": "online", 00:11:41.531 "raid_level": "concat", 00:11:41.531 
"superblock": true, 00:11:41.531 "num_base_bdevs": 4, 00:11:41.531 "num_base_bdevs_discovered": 4, 00:11:41.531 "num_base_bdevs_operational": 4, 00:11:41.531 "base_bdevs_list": [ 00:11:41.531 { 00:11:41.531 "name": "pt1", 00:11:41.531 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:41.531 "is_configured": true, 00:11:41.531 "data_offset": 2048, 00:11:41.531 "data_size": 63488 00:11:41.531 }, 00:11:41.531 { 00:11:41.531 "name": "pt2", 00:11:41.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:41.531 "is_configured": true, 00:11:41.531 "data_offset": 2048, 00:11:41.531 "data_size": 63488 00:11:41.531 }, 00:11:41.531 { 00:11:41.531 "name": "pt3", 00:11:41.531 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:41.531 "is_configured": true, 00:11:41.531 "data_offset": 2048, 00:11:41.531 "data_size": 63488 00:11:41.531 }, 00:11:41.531 { 00:11:41.531 "name": "pt4", 00:11:41.531 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:41.531 "is_configured": true, 00:11:41.531 "data_offset": 2048, 00:11:41.531 "data_size": 63488 00:11:41.531 } 00:11:41.531 ] 00:11:41.531 }' 00:11:41.531 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.531 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.791 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:41.791 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:41.791 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:41.791 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:41.791 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:41.791 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:41.791 17:56:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:41.791 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:41.791 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.791 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.791 [2024-11-26 17:56:23.632240] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.050 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.050 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:42.050 "name": "raid_bdev1", 00:11:42.050 "aliases": [ 00:11:42.050 "3ff5cc26-a592-4465-bd75-ce326ac1ef11" 00:11:42.050 ], 00:11:42.050 "product_name": "Raid Volume", 00:11:42.050 "block_size": 512, 00:11:42.050 "num_blocks": 253952, 00:11:42.050 "uuid": "3ff5cc26-a592-4465-bd75-ce326ac1ef11", 00:11:42.050 "assigned_rate_limits": { 00:11:42.050 "rw_ios_per_sec": 0, 00:11:42.050 "rw_mbytes_per_sec": 0, 00:11:42.050 "r_mbytes_per_sec": 0, 00:11:42.050 "w_mbytes_per_sec": 0 00:11:42.050 }, 00:11:42.050 "claimed": false, 00:11:42.050 "zoned": false, 00:11:42.050 "supported_io_types": { 00:11:42.050 "read": true, 00:11:42.050 "write": true, 00:11:42.050 "unmap": true, 00:11:42.050 "flush": true, 00:11:42.050 "reset": true, 00:11:42.050 "nvme_admin": false, 00:11:42.050 "nvme_io": false, 00:11:42.050 "nvme_io_md": false, 00:11:42.050 "write_zeroes": true, 00:11:42.050 "zcopy": false, 00:11:42.050 "get_zone_info": false, 00:11:42.050 "zone_management": false, 00:11:42.050 "zone_append": false, 00:11:42.050 "compare": false, 00:11:42.050 "compare_and_write": false, 00:11:42.050 "abort": false, 00:11:42.050 "seek_hole": false, 00:11:42.050 "seek_data": false, 00:11:42.050 "copy": false, 00:11:42.050 "nvme_iov_md": false 00:11:42.050 }, 00:11:42.050 
"memory_domains": [ 00:11:42.050 { 00:11:42.050 "dma_device_id": "system", 00:11:42.050 "dma_device_type": 1 00:11:42.050 }, 00:11:42.050 { 00:11:42.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.050 "dma_device_type": 2 00:11:42.050 }, 00:11:42.050 { 00:11:42.050 "dma_device_id": "system", 00:11:42.050 "dma_device_type": 1 00:11:42.050 }, 00:11:42.050 { 00:11:42.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.050 "dma_device_type": 2 00:11:42.050 }, 00:11:42.050 { 00:11:42.050 "dma_device_id": "system", 00:11:42.050 "dma_device_type": 1 00:11:42.050 }, 00:11:42.050 { 00:11:42.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.050 "dma_device_type": 2 00:11:42.050 }, 00:11:42.050 { 00:11:42.050 "dma_device_id": "system", 00:11:42.050 "dma_device_type": 1 00:11:42.050 }, 00:11:42.050 { 00:11:42.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.050 "dma_device_type": 2 00:11:42.050 } 00:11:42.050 ], 00:11:42.050 "driver_specific": { 00:11:42.050 "raid": { 00:11:42.050 "uuid": "3ff5cc26-a592-4465-bd75-ce326ac1ef11", 00:11:42.050 "strip_size_kb": 64, 00:11:42.050 "state": "online", 00:11:42.051 "raid_level": "concat", 00:11:42.051 "superblock": true, 00:11:42.051 "num_base_bdevs": 4, 00:11:42.051 "num_base_bdevs_discovered": 4, 00:11:42.051 "num_base_bdevs_operational": 4, 00:11:42.051 "base_bdevs_list": [ 00:11:42.051 { 00:11:42.051 "name": "pt1", 00:11:42.051 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:42.051 "is_configured": true, 00:11:42.051 "data_offset": 2048, 00:11:42.051 "data_size": 63488 00:11:42.051 }, 00:11:42.051 { 00:11:42.051 "name": "pt2", 00:11:42.051 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:42.051 "is_configured": true, 00:11:42.051 "data_offset": 2048, 00:11:42.051 "data_size": 63488 00:11:42.051 }, 00:11:42.051 { 00:11:42.051 "name": "pt3", 00:11:42.051 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:42.051 "is_configured": true, 00:11:42.051 "data_offset": 2048, 00:11:42.051 "data_size": 63488 
00:11:42.051 }, 00:11:42.051 { 00:11:42.051 "name": "pt4", 00:11:42.051 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:42.051 "is_configured": true, 00:11:42.051 "data_offset": 2048, 00:11:42.051 "data_size": 63488 00:11:42.051 } 00:11:42.051 ] 00:11:42.051 } 00:11:42.051 } 00:11:42.051 }' 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:42.051 pt2 00:11:42.051 pt3 00:11:42.051 pt4' 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:42.051 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.309 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.309 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.309 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.309 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:42.309 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:42.309 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.309 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.309 [2024-11-26 17:56:23.943626] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.309 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.309 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3ff5cc26-a592-4465-bd75-ce326ac1ef11 '!=' 3ff5cc26-a592-4465-bd75-ce326ac1ef11 ']' 00:11:42.309 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:42.309 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:42.309 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:42.309 17:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72911 00:11:42.309 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72911 ']' 00:11:42.309 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72911 00:11:42.309 17:56:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:42.309 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.309 17:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72911 00:11:42.309 17:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.309 17:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.309 17:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72911' 00:11:42.309 killing process with pid 72911 00:11:42.309 17:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72911 00:11:42.309 [2024-11-26 17:56:24.024005] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.309 [2024-11-26 17:56:24.024116] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.309 [2024-11-26 17:56:24.024204] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.309 [2024-11-26 17:56:24.024216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:42.309 17:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72911 00:11:42.875 [2024-11-26 17:56:24.511699] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.250 17:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:44.250 00:11:44.250 real 0m6.160s 00:11:44.250 user 0m8.747s 00:11:44.250 sys 0m1.029s 00:11:44.250 ************************************ 00:11:44.250 END TEST raid_superblock_test 00:11:44.250 ************************************ 00:11:44.250 17:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.250 17:56:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.250 17:56:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:44.250 17:56:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:44.250 17:56:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.250 17:56:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.250 ************************************ 00:11:44.250 START TEST raid_read_error_test 00:11:44.250 ************************************ 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RTyWwBkPIr 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73180 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73180 00:11:44.251 Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73180 ']' 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.251 17:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:44.251 [2024-11-26 17:56:26.058756] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:11:44.251 [2024-11-26 17:56:26.058892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73180 ] 00:11:44.509 [2024-11-26 17:56:26.220922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.509 [2024-11-26 17:56:26.354795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.767 [2024-11-26 17:56:26.597569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.767 [2024-11-26 17:56:26.597620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.333 17:56:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.333 17:56:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:45.333 17:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.333 17:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:45.333 17:56:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.333 17:56:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.333 BaseBdev1_malloc 00:11:45.333 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.333 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:45.333 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.334 true 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.334 [2024-11-26 17:56:27.036419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:45.334 [2024-11-26 17:56:27.036488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.334 [2024-11-26 17:56:27.036515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:45.334 [2024-11-26 17:56:27.036528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.334 [2024-11-26 17:56:27.038986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.334 [2024-11-26 17:56:27.039126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:45.334 BaseBdev1 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.334 BaseBdev2_malloc 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.334 true 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.334 [2024-11-26 17:56:27.107667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:45.334 [2024-11-26 17:56:27.107739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.334 [2024-11-26 17:56:27.107763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:45.334 [2024-11-26 17:56:27.107776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.334 [2024-11-26 17:56:27.110281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.334 [2024-11-26 17:56:27.110398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:45.334 BaseBdev2 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.334 BaseBdev3_malloc 00:11:45.334 17:56:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.334 true 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.334 [2024-11-26 17:56:27.184818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:45.334 [2024-11-26 17:56:27.184883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.334 [2024-11-26 17:56:27.184906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:45.334 [2024-11-26 17:56:27.184918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.334 [2024-11-26 17:56:27.187379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.334 [2024-11-26 17:56:27.187424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:45.334 BaseBdev3 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.334 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.593 BaseBdev4_malloc 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.593 true 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.593 [2024-11-26 17:56:27.256646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:45.593 [2024-11-26 17:56:27.256712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.593 [2024-11-26 17:56:27.256735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:45.593 [2024-11-26 17:56:27.256747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.593 [2024-11-26 17:56:27.259209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.593 [2024-11-26 17:56:27.259255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:45.593 BaseBdev4 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.593 [2024-11-26 17:56:27.268722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.593 [2024-11-26 17:56:27.270868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.593 [2024-11-26 17:56:27.271063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:45.593 [2024-11-26 17:56:27.271158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:45.593 [2024-11-26 17:56:27.271472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:45.593 [2024-11-26 17:56:27.271493] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:45.593 [2024-11-26 17:56:27.271827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:45.593 [2024-11-26 17:56:27.272043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:45.593 [2024-11-26 17:56:27.272057] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:45.593 [2024-11-26 17:56:27.272272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:45.593 17:56:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.593 "name": "raid_bdev1", 00:11:45.593 "uuid": "57874b27-0154-4b10-85a8-8bf731e11ffe", 00:11:45.593 "strip_size_kb": 64, 00:11:45.593 "state": "online", 00:11:45.593 "raid_level": "concat", 00:11:45.593 "superblock": true, 00:11:45.593 "num_base_bdevs": 4, 00:11:45.593 "num_base_bdevs_discovered": 4, 00:11:45.593 "num_base_bdevs_operational": 4, 00:11:45.593 "base_bdevs_list": [ 
00:11:45.593 { 00:11:45.593 "name": "BaseBdev1", 00:11:45.593 "uuid": "7f77080b-a20f-5b08-8eb8-57acfaf4ac4c", 00:11:45.593 "is_configured": true, 00:11:45.593 "data_offset": 2048, 00:11:45.593 "data_size": 63488 00:11:45.593 }, 00:11:45.593 { 00:11:45.593 "name": "BaseBdev2", 00:11:45.593 "uuid": "09f4414b-c70e-5bf5-a5ab-82afa68d380b", 00:11:45.593 "is_configured": true, 00:11:45.593 "data_offset": 2048, 00:11:45.593 "data_size": 63488 00:11:45.593 }, 00:11:45.593 { 00:11:45.593 "name": "BaseBdev3", 00:11:45.593 "uuid": "b1e52d1a-bc13-5698-8191-c1151c133b6f", 00:11:45.593 "is_configured": true, 00:11:45.593 "data_offset": 2048, 00:11:45.593 "data_size": 63488 00:11:45.593 }, 00:11:45.593 { 00:11:45.593 "name": "BaseBdev4", 00:11:45.593 "uuid": "bf224726-c5c1-5c11-b2aa-3374af998332", 00:11:45.593 "is_configured": true, 00:11:45.593 "data_offset": 2048, 00:11:45.593 "data_size": 63488 00:11:45.593 } 00:11:45.593 ] 00:11:45.593 }' 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.593 17:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.159 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:46.159 17:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:46.159 [2024-11-26 17:56:27.873460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.094 17:56:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.094 17:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.095 17:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.095 17:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.095 17:56:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.095 "name": "raid_bdev1", 00:11:47.095 "uuid": "57874b27-0154-4b10-85a8-8bf731e11ffe", 00:11:47.095 "strip_size_kb": 64, 00:11:47.095 "state": "online", 00:11:47.095 "raid_level": "concat", 00:11:47.095 "superblock": true, 00:11:47.095 "num_base_bdevs": 4, 00:11:47.095 "num_base_bdevs_discovered": 4, 00:11:47.095 "num_base_bdevs_operational": 4, 00:11:47.095 "base_bdevs_list": [ 00:11:47.095 { 00:11:47.095 "name": "BaseBdev1", 00:11:47.095 "uuid": "7f77080b-a20f-5b08-8eb8-57acfaf4ac4c", 00:11:47.095 "is_configured": true, 00:11:47.095 "data_offset": 2048, 00:11:47.095 "data_size": 63488 00:11:47.095 }, 00:11:47.095 { 00:11:47.095 "name": "BaseBdev2", 00:11:47.095 "uuid": "09f4414b-c70e-5bf5-a5ab-82afa68d380b", 00:11:47.095 "is_configured": true, 00:11:47.095 "data_offset": 2048, 00:11:47.095 "data_size": 63488 00:11:47.095 }, 00:11:47.095 { 00:11:47.095 "name": "BaseBdev3", 00:11:47.095 "uuid": "b1e52d1a-bc13-5698-8191-c1151c133b6f", 00:11:47.095 "is_configured": true, 00:11:47.095 "data_offset": 2048, 00:11:47.095 "data_size": 63488 00:11:47.095 }, 00:11:47.095 { 00:11:47.095 "name": "BaseBdev4", 00:11:47.095 "uuid": "bf224726-c5c1-5c11-b2aa-3374af998332", 00:11:47.095 "is_configured": true, 00:11:47.095 "data_offset": 2048, 00:11:47.095 "data_size": 63488 00:11:47.095 } 00:11:47.095 ] 00:11:47.095 }' 00:11:47.095 17:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.095 17:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.662 17:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:47.662 17:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.662 17:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.662 [2024-11-26 17:56:29.270738] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.662 [2024-11-26 17:56:29.270778] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.662 [2024-11-26 17:56:29.274345] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.662 [2024-11-26 17:56:29.274470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.662 [2024-11-26 17:56:29.274578] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.662 [2024-11-26 17:56:29.274646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:47.662 { 00:11:47.662 "results": [ 00:11:47.662 { 00:11:47.662 "job": "raid_bdev1", 00:11:47.662 "core_mask": "0x1", 00:11:47.662 "workload": "randrw", 00:11:47.662 "percentage": 50, 00:11:47.662 "status": "finished", 00:11:47.662 "queue_depth": 1, 00:11:47.662 "io_size": 131072, 00:11:47.662 "runtime": 1.397766, 00:11:47.662 "iops": 12819.026932977336, 00:11:47.662 "mibps": 1602.378366622167, 00:11:47.662 "io_failed": 1, 00:11:47.662 "io_timeout": 0, 00:11:47.662 "avg_latency_us": 107.6041643972354, 00:11:47.662 "min_latency_us": 34.20786026200874, 00:11:47.662 "max_latency_us": 1788.646288209607 00:11:47.662 } 00:11:47.662 ], 00:11:47.662 "core_count": 1 00:11:47.662 } 00:11:47.662 17:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.662 17:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73180 00:11:47.662 17:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73180 ']' 00:11:47.662 17:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73180 00:11:47.662 17:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:47.662 17:56:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.663 17:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73180 00:11:47.663 17:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.663 17:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.663 17:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73180' 00:11:47.663 killing process with pid 73180 00:11:47.663 17:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73180 00:11:47.663 [2024-11-26 17:56:29.311448] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.663 17:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73180 00:11:47.922 [2024-11-26 17:56:29.716850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:49.827 17:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RTyWwBkPIr 00:11:49.827 17:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:49.827 17:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:49.827 17:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:49.827 17:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:49.827 17:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:49.827 17:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:49.827 17:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:49.827 00:11:49.827 real 0m5.233s 00:11:49.827 user 0m6.201s 00:11:49.827 sys 0m0.624s 00:11:49.827 17:56:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:49.827 ************************************ 00:11:49.827 END TEST raid_read_error_test 00:11:49.827 ************************************ 00:11:49.827 17:56:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.827 17:56:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:49.827 17:56:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:49.827 17:56:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.827 17:56:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:49.827 ************************************ 00:11:49.827 START TEST raid_write_error_test 00:11:49.827 ************************************ 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:49.827 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:49.828 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:49.828 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:49.828 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.N7sKKX2yEW 00:11:49.828 17:56:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73331 00:11:49.828 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73331 00:11:49.828 17:56:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73331 ']' 00:11:49.828 17:56:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.828 17:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:49.828 17:56:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.828 17:56:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.828 17:56:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.828 17:56:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.828 [2024-11-26 17:56:31.365988] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:11:49.828 [2024-11-26 17:56:31.366145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73331 ] 00:11:49.828 [2024-11-26 17:56:31.548700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.087 [2024-11-26 17:56:31.688171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.087 [2024-11-26 17:56:31.932632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.087 [2024-11-26 17:56:31.932708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.675 BaseBdev1_malloc 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.675 true 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.675 [2024-11-26 17:56:32.357337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:50.675 [2024-11-26 17:56:32.357420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.675 [2024-11-26 17:56:32.357467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:50.675 [2024-11-26 17:56:32.357486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.675 [2024-11-26 17:56:32.360429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.675 [2024-11-26 17:56:32.360495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:50.675 BaseBdev1 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.675 BaseBdev2_malloc 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:50.675 17:56:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.675 true 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.675 [2024-11-26 17:56:32.433590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:50.675 [2024-11-26 17:56:32.433674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.675 [2024-11-26 17:56:32.433699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:50.675 [2024-11-26 17:56:32.433712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.675 [2024-11-26 17:56:32.436716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.675 [2024-11-26 17:56:32.436770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:50.675 BaseBdev2 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:50.675 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.676 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:50.676 BaseBdev3_malloc 00:11:50.676 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.676 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:50.676 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.676 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.676 true 00:11:50.676 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.676 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:50.676 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.676 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.676 [2024-11-26 17:56:32.521104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:50.676 [2024-11-26 17:56:32.521179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.676 [2024-11-26 17:56:32.521208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:50.676 [2024-11-26 17:56:32.521221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.676 [2024-11-26 17:56:32.524144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.676 [2024-11-26 17:56:32.524282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:50.676 BaseBdev3 00:11:50.676 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.676 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.676 17:56:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:50.676 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.676 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.935 BaseBdev4_malloc 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.935 true 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.935 [2024-11-26 17:56:32.600506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:50.935 [2024-11-26 17:56:32.600582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.935 [2024-11-26 17:56:32.600609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:50.935 [2024-11-26 17:56:32.600622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.935 [2024-11-26 17:56:32.603195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.935 [2024-11-26 17:56:32.603303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:50.935 BaseBdev4 
00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.935 [2024-11-26 17:56:32.612567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.935 [2024-11-26 17:56:32.614813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.935 [2024-11-26 17:56:32.614990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:50.935 [2024-11-26 17:56:32.615106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:50.935 [2024-11-26 17:56:32.615432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:50.935 [2024-11-26 17:56:32.615453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:50.935 [2024-11-26 17:56:32.615788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:50.935 [2024-11-26 17:56:32.615981] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:50.935 [2024-11-26 17:56:32.615993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:50.935 [2024-11-26 17:56:32.616215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.935 "name": "raid_bdev1", 00:11:50.935 "uuid": "a0e2c57b-a3bc-4bfc-b303-93280377d02a", 00:11:50.935 "strip_size_kb": 64, 00:11:50.935 "state": "online", 00:11:50.935 "raid_level": "concat", 00:11:50.935 "superblock": true, 00:11:50.935 "num_base_bdevs": 4, 00:11:50.935 "num_base_bdevs_discovered": 4, 00:11:50.935 
"num_base_bdevs_operational": 4, 00:11:50.935 "base_bdevs_list": [ 00:11:50.935 { 00:11:50.935 "name": "BaseBdev1", 00:11:50.935 "uuid": "f98d8b07-e224-5585-b36f-1fbfde5cb619", 00:11:50.935 "is_configured": true, 00:11:50.935 "data_offset": 2048, 00:11:50.935 "data_size": 63488 00:11:50.935 }, 00:11:50.935 { 00:11:50.935 "name": "BaseBdev2", 00:11:50.935 "uuid": "c2e03e04-800a-5a3d-891b-7962cadf2e0c", 00:11:50.935 "is_configured": true, 00:11:50.935 "data_offset": 2048, 00:11:50.935 "data_size": 63488 00:11:50.935 }, 00:11:50.935 { 00:11:50.935 "name": "BaseBdev3", 00:11:50.935 "uuid": "91059571-1cf9-5244-b94d-7369b4c080e9", 00:11:50.935 "is_configured": true, 00:11:50.935 "data_offset": 2048, 00:11:50.935 "data_size": 63488 00:11:50.935 }, 00:11:50.935 { 00:11:50.935 "name": "BaseBdev4", 00:11:50.935 "uuid": "7c7cafd9-f977-54e1-9d6d-acb47ea9db58", 00:11:50.935 "is_configured": true, 00:11:50.935 "data_offset": 2048, 00:11:50.935 "data_size": 63488 00:11:50.935 } 00:11:50.935 ] 00:11:50.935 }' 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.935 17:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.507 17:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:51.507 17:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:51.507 [2024-11-26 17:56:33.177460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.450 17:56:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.450 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.450 "name": "raid_bdev1", 00:11:52.450 "uuid": "a0e2c57b-a3bc-4bfc-b303-93280377d02a", 00:11:52.450 "strip_size_kb": 64, 00:11:52.450 "state": "online", 00:11:52.451 "raid_level": "concat", 00:11:52.451 "superblock": true, 00:11:52.451 "num_base_bdevs": 4, 00:11:52.451 "num_base_bdevs_discovered": 4, 00:11:52.451 "num_base_bdevs_operational": 4, 00:11:52.451 "base_bdevs_list": [ 00:11:52.451 { 00:11:52.451 "name": "BaseBdev1", 00:11:52.451 "uuid": "f98d8b07-e224-5585-b36f-1fbfde5cb619", 00:11:52.451 "is_configured": true, 00:11:52.451 "data_offset": 2048, 00:11:52.451 "data_size": 63488 00:11:52.451 }, 00:11:52.451 { 00:11:52.451 "name": "BaseBdev2", 00:11:52.451 "uuid": "c2e03e04-800a-5a3d-891b-7962cadf2e0c", 00:11:52.451 "is_configured": true, 00:11:52.451 "data_offset": 2048, 00:11:52.451 "data_size": 63488 00:11:52.451 }, 00:11:52.451 { 00:11:52.451 "name": "BaseBdev3", 00:11:52.451 "uuid": "91059571-1cf9-5244-b94d-7369b4c080e9", 00:11:52.451 "is_configured": true, 00:11:52.451 "data_offset": 2048, 00:11:52.451 "data_size": 63488 00:11:52.451 }, 00:11:52.451 { 00:11:52.451 "name": "BaseBdev4", 00:11:52.451 "uuid": "7c7cafd9-f977-54e1-9d6d-acb47ea9db58", 00:11:52.451 "is_configured": true, 00:11:52.451 "data_offset": 2048, 00:11:52.451 "data_size": 63488 00:11:52.451 } 00:11:52.451 ] 00:11:52.451 }' 00:11:52.451 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.451 17:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.709 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:52.709 17:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.709 17:56:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:52.709 [2024-11-26 17:56:34.506768] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:52.709 [2024-11-26 17:56:34.506809] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.709 [2024-11-26 17:56:34.510102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:52.709 [2024-11-26 17:56:34.510174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.709 [2024-11-26 17:56:34.510224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.709 [2024-11-26 17:56:34.510238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:52.709 { 00:11:52.709 "results": [ 00:11:52.709 { 00:11:52.709 "job": "raid_bdev1", 00:11:52.709 "core_mask": "0x1", 00:11:52.709 "workload": "randrw", 00:11:52.709 "percentage": 50, 00:11:52.709 "status": "finished", 00:11:52.709 "queue_depth": 1, 00:11:52.709 "io_size": 131072, 00:11:52.709 "runtime": 1.329762, 00:11:52.709 "iops": 12754.914037248771, 00:11:52.709 "mibps": 1594.3642546560964, 00:11:52.709 "io_failed": 1, 00:11:52.709 "io_timeout": 0, 00:11:52.709 "avg_latency_us": 108.15901107484544, 00:11:52.709 "min_latency_us": 34.65502183406114, 00:11:52.709 "max_latency_us": 1752.8733624454148 00:11:52.709 } 00:11:52.709 ], 00:11:52.709 "core_count": 1 00:11:52.709 } 00:11:52.709 17:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.709 17:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73331 00:11:52.709 17:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73331 ']' 00:11:52.709 17:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73331 00:11:52.709 17:56:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:52.709 17:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.709 17:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73331 00:11:52.709 17:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:52.709 17:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:52.709 17:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73331' 00:11:52.709 killing process with pid 73331 00:11:52.709 17:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73331 00:11:52.709 [2024-11-26 17:56:34.549471] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:52.709 17:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73331 00:11:53.277 [2024-11-26 17:56:34.953347] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:54.652 17:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.N7sKKX2yEW 00:11:54.652 17:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:54.652 17:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:54.652 17:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:54.652 17:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:54.652 17:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:54.652 17:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:54.652 17:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:54.653 00:11:54.653 real 0m5.157s 00:11:54.653 user 0m6.052s 
00:11:54.653 sys 0m0.596s 00:11:54.653 ************************************ 00:11:54.653 END TEST raid_write_error_test 00:11:54.653 ************************************ 00:11:54.653 17:56:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.653 17:56:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.653 17:56:36 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:54.653 17:56:36 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:54.653 17:56:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:54.653 17:56:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.653 17:56:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:54.653 ************************************ 00:11:54.653 START TEST raid_state_function_test 00:11:54.653 ************************************ 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.653 
17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:54.653 17:56:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73479 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73479' 00:11:54.653 Process raid pid: 73479 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73479 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73479 ']' 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.653 17:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.912 [2024-11-26 17:56:36.575553] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:11:54.912 [2024-11-26 17:56:36.575743] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.912 [2024-11-26 17:56:36.740417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.171 [2024-11-26 17:56:36.882612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.431 [2024-11-26 17:56:37.131112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.431 [2024-11-26 17:56:37.131265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.691 [2024-11-26 17:56:37.497334] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.691 [2024-11-26 17:56:37.497451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.691 [2024-11-26 17:56:37.497469] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:55.691 [2024-11-26 17:56:37.497481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.691 [2024-11-26 17:56:37.497489] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:55.691 [2024-11-26 17:56:37.497500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:55.691 [2024-11-26 17:56:37.497514] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:55.691 [2024-11-26 17:56:37.497525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.691 17:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.949 17:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.949 "name": "Existed_Raid", 00:11:55.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.949 "strip_size_kb": 0, 00:11:55.950 "state": "configuring", 00:11:55.950 "raid_level": "raid1", 00:11:55.950 "superblock": false, 00:11:55.950 "num_base_bdevs": 4, 00:11:55.950 "num_base_bdevs_discovered": 0, 00:11:55.950 "num_base_bdevs_operational": 4, 00:11:55.950 "base_bdevs_list": [ 00:11:55.950 { 00:11:55.950 "name": "BaseBdev1", 00:11:55.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.950 "is_configured": false, 00:11:55.950 "data_offset": 0, 00:11:55.950 "data_size": 0 00:11:55.950 }, 00:11:55.950 { 00:11:55.950 "name": "BaseBdev2", 00:11:55.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.950 "is_configured": false, 00:11:55.950 "data_offset": 0, 00:11:55.950 "data_size": 0 00:11:55.950 }, 00:11:55.950 { 00:11:55.950 "name": "BaseBdev3", 00:11:55.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.950 "is_configured": false, 00:11:55.950 "data_offset": 0, 00:11:55.950 "data_size": 0 00:11:55.950 }, 00:11:55.950 { 00:11:55.950 "name": "BaseBdev4", 00:11:55.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.950 "is_configured": false, 00:11:55.950 "data_offset": 0, 00:11:55.950 "data_size": 0 00:11:55.950 } 00:11:55.950 ] 00:11:55.950 }' 00:11:55.950 17:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.950 17:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.209 17:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:56.209 17:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.209 17:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.209 [2024-11-26 17:56:37.937309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:56.209 [2024-11-26 17:56:37.937421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:56.209 17:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.209 17:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:56.209 17:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.209 17:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.209 [2024-11-26 17:56:37.949336] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:56.209 [2024-11-26 17:56:37.949452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:56.209 [2024-11-26 17:56:37.949496] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.209 [2024-11-26 17:56:37.949535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.209 [2024-11-26 17:56:37.949576] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:56.209 [2024-11-26 17:56:37.949603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:56.209 [2024-11-26 17:56:37.949663] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:56.209 [2024-11-26 17:56:37.949690] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:56.209 17:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.209 17:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:56.209 17:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.209 17:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.209 [2024-11-26 17:56:38.000918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.209 BaseBdev1 00:11:56.209 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.209 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:56.209 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:56.209 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.209 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:56.209 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.209 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.209 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.209 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.209 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.209 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.209 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:56.209 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.209 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.209 [ 00:11:56.209 { 00:11:56.209 "name": "BaseBdev1", 00:11:56.209 "aliases": [ 00:11:56.209 "e024bf5d-9353-4f0d-badb-65455ad7fc59" 00:11:56.209 ], 00:11:56.209 "product_name": "Malloc disk", 00:11:56.209 "block_size": 512, 00:11:56.209 "num_blocks": 65536, 00:11:56.209 "uuid": "e024bf5d-9353-4f0d-badb-65455ad7fc59", 00:11:56.209 "assigned_rate_limits": { 00:11:56.209 "rw_ios_per_sec": 0, 00:11:56.209 "rw_mbytes_per_sec": 0, 00:11:56.209 "r_mbytes_per_sec": 0, 00:11:56.209 "w_mbytes_per_sec": 0 00:11:56.209 }, 00:11:56.209 "claimed": true, 00:11:56.209 "claim_type": "exclusive_write", 00:11:56.209 "zoned": false, 00:11:56.209 "supported_io_types": { 00:11:56.209 "read": true, 00:11:56.209 "write": true, 00:11:56.209 "unmap": true, 00:11:56.209 "flush": true, 00:11:56.209 "reset": true, 00:11:56.209 "nvme_admin": false, 00:11:56.209 "nvme_io": false, 00:11:56.209 "nvme_io_md": false, 00:11:56.209 "write_zeroes": true, 00:11:56.209 "zcopy": true, 00:11:56.210 "get_zone_info": false, 00:11:56.210 "zone_management": false, 00:11:56.210 "zone_append": false, 00:11:56.210 "compare": false, 00:11:56.210 "compare_and_write": false, 00:11:56.210 "abort": true, 00:11:56.210 "seek_hole": false, 00:11:56.210 "seek_data": false, 00:11:56.210 "copy": true, 00:11:56.210 "nvme_iov_md": false 00:11:56.210 }, 00:11:56.210 "memory_domains": [ 00:11:56.210 { 00:11:56.210 "dma_device_id": "system", 00:11:56.210 "dma_device_type": 1 00:11:56.210 }, 00:11:56.210 { 00:11:56.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.210 "dma_device_type": 2 00:11:56.210 } 00:11:56.210 ], 00:11:56.210 "driver_specific": {} 00:11:56.210 } 00:11:56.210 ] 00:11:56.210 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:56.210 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:56.210 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:56.210 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.210 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.210 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.210 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.210 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.210 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.210 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.210 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.210 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.210 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.210 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.210 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.210 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.468 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.468 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.468 "name": "Existed_Raid", 
00:11:56.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.468 "strip_size_kb": 0, 00:11:56.468 "state": "configuring", 00:11:56.468 "raid_level": "raid1", 00:11:56.468 "superblock": false, 00:11:56.468 "num_base_bdevs": 4, 00:11:56.468 "num_base_bdevs_discovered": 1, 00:11:56.468 "num_base_bdevs_operational": 4, 00:11:56.468 "base_bdevs_list": [ 00:11:56.468 { 00:11:56.468 "name": "BaseBdev1", 00:11:56.468 "uuid": "e024bf5d-9353-4f0d-badb-65455ad7fc59", 00:11:56.468 "is_configured": true, 00:11:56.468 "data_offset": 0, 00:11:56.468 "data_size": 65536 00:11:56.468 }, 00:11:56.468 { 00:11:56.468 "name": "BaseBdev2", 00:11:56.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.468 "is_configured": false, 00:11:56.468 "data_offset": 0, 00:11:56.468 "data_size": 0 00:11:56.468 }, 00:11:56.468 { 00:11:56.468 "name": "BaseBdev3", 00:11:56.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.468 "is_configured": false, 00:11:56.468 "data_offset": 0, 00:11:56.468 "data_size": 0 00:11:56.468 }, 00:11:56.468 { 00:11:56.468 "name": "BaseBdev4", 00:11:56.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.469 "is_configured": false, 00:11:56.469 "data_offset": 0, 00:11:56.469 "data_size": 0 00:11:56.469 } 00:11:56.469 ] 00:11:56.469 }' 00:11:56.469 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.469 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.728 [2024-11-26 17:56:38.464201] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:56.728 [2024-11-26 17:56:38.464330] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.728 [2024-11-26 17:56:38.476251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.728 [2024-11-26 17:56:38.478487] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.728 [2024-11-26 17:56:38.478584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.728 [2024-11-26 17:56:38.478636] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:56.728 [2024-11-26 17:56:38.478668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:56.728 [2024-11-26 17:56:38.478697] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:56.728 [2024-11-26 17:56:38.478725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:56.728 
17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.728 "name": "Existed_Raid", 00:11:56.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.728 "strip_size_kb": 0, 00:11:56.728 "state": "configuring", 00:11:56.728 "raid_level": "raid1", 00:11:56.728 "superblock": false, 00:11:56.728 "num_base_bdevs": 4, 00:11:56.728 "num_base_bdevs_discovered": 1, 
00:11:56.728 "num_base_bdevs_operational": 4, 00:11:56.728 "base_bdevs_list": [ 00:11:56.728 { 00:11:56.728 "name": "BaseBdev1", 00:11:56.728 "uuid": "e024bf5d-9353-4f0d-badb-65455ad7fc59", 00:11:56.728 "is_configured": true, 00:11:56.728 "data_offset": 0, 00:11:56.728 "data_size": 65536 00:11:56.728 }, 00:11:56.728 { 00:11:56.728 "name": "BaseBdev2", 00:11:56.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.728 "is_configured": false, 00:11:56.728 "data_offset": 0, 00:11:56.728 "data_size": 0 00:11:56.728 }, 00:11:56.728 { 00:11:56.728 "name": "BaseBdev3", 00:11:56.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.728 "is_configured": false, 00:11:56.728 "data_offset": 0, 00:11:56.728 "data_size": 0 00:11:56.728 }, 00:11:56.728 { 00:11:56.728 "name": "BaseBdev4", 00:11:56.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.728 "is_configured": false, 00:11:56.728 "data_offset": 0, 00:11:56.728 "data_size": 0 00:11:56.728 } 00:11:56.728 ] 00:11:56.728 }' 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.728 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.296 17:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:57.296 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.296 17:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.296 [2024-11-26 17:56:39.010478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.296 BaseBdev2 00:11:57.296 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.296 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:57.296 17:56:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:57.296 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:57.296 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:57.296 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:57.296 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:57.296 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:57.296 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.296 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.296 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.296 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:57.296 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.296 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.296 [ 00:11:57.296 { 00:11:57.296 "name": "BaseBdev2", 00:11:57.296 "aliases": [ 00:11:57.296 "2f1a6476-bfac-45e6-92cc-de5abbf3dae5" 00:11:57.296 ], 00:11:57.296 "product_name": "Malloc disk", 00:11:57.296 "block_size": 512, 00:11:57.296 "num_blocks": 65536, 00:11:57.296 "uuid": "2f1a6476-bfac-45e6-92cc-de5abbf3dae5", 00:11:57.296 "assigned_rate_limits": { 00:11:57.296 "rw_ios_per_sec": 0, 00:11:57.296 "rw_mbytes_per_sec": 0, 00:11:57.296 "r_mbytes_per_sec": 0, 00:11:57.296 "w_mbytes_per_sec": 0 00:11:57.296 }, 00:11:57.296 "claimed": true, 00:11:57.296 "claim_type": "exclusive_write", 00:11:57.296 "zoned": false, 00:11:57.296 "supported_io_types": { 00:11:57.296 "read": true, 
00:11:57.296 "write": true, 00:11:57.296 "unmap": true, 00:11:57.296 "flush": true, 00:11:57.296 "reset": true, 00:11:57.296 "nvme_admin": false, 00:11:57.296 "nvme_io": false, 00:11:57.296 "nvme_io_md": false, 00:11:57.296 "write_zeroes": true, 00:11:57.296 "zcopy": true, 00:11:57.296 "get_zone_info": false, 00:11:57.296 "zone_management": false, 00:11:57.296 "zone_append": false, 00:11:57.296 "compare": false, 00:11:57.296 "compare_and_write": false, 00:11:57.296 "abort": true, 00:11:57.297 "seek_hole": false, 00:11:57.297 "seek_data": false, 00:11:57.297 "copy": true, 00:11:57.297 "nvme_iov_md": false 00:11:57.297 }, 00:11:57.297 "memory_domains": [ 00:11:57.297 { 00:11:57.297 "dma_device_id": "system", 00:11:57.297 "dma_device_type": 1 00:11:57.297 }, 00:11:57.297 { 00:11:57.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.297 "dma_device_type": 2 00:11:57.297 } 00:11:57.297 ], 00:11:57.297 "driver_specific": {} 00:11:57.297 } 00:11:57.297 ] 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.297 "name": "Existed_Raid", 00:11:57.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.297 "strip_size_kb": 0, 00:11:57.297 "state": "configuring", 00:11:57.297 "raid_level": "raid1", 00:11:57.297 "superblock": false, 00:11:57.297 "num_base_bdevs": 4, 00:11:57.297 "num_base_bdevs_discovered": 2, 00:11:57.297 "num_base_bdevs_operational": 4, 00:11:57.297 "base_bdevs_list": [ 00:11:57.297 { 00:11:57.297 "name": "BaseBdev1", 00:11:57.297 "uuid": "e024bf5d-9353-4f0d-badb-65455ad7fc59", 00:11:57.297 "is_configured": true, 00:11:57.297 "data_offset": 0, 00:11:57.297 "data_size": 65536 00:11:57.297 }, 00:11:57.297 { 00:11:57.297 "name": "BaseBdev2", 00:11:57.297 "uuid": "2f1a6476-bfac-45e6-92cc-de5abbf3dae5", 00:11:57.297 "is_configured": true, 
00:11:57.297 "data_offset": 0, 00:11:57.297 "data_size": 65536 00:11:57.297 }, 00:11:57.297 { 00:11:57.297 "name": "BaseBdev3", 00:11:57.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.297 "is_configured": false, 00:11:57.297 "data_offset": 0, 00:11:57.297 "data_size": 0 00:11:57.297 }, 00:11:57.297 { 00:11:57.297 "name": "BaseBdev4", 00:11:57.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.297 "is_configured": false, 00:11:57.297 "data_offset": 0, 00:11:57.297 "data_size": 0 00:11:57.297 } 00:11:57.297 ] 00:11:57.297 }' 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.297 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.864 [2024-11-26 17:56:39.572476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.864 BaseBdev3 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.864 [ 00:11:57.864 { 00:11:57.864 "name": "BaseBdev3", 00:11:57.864 "aliases": [ 00:11:57.864 "736e301f-e061-42a6-b595-6f91c3735163" 00:11:57.864 ], 00:11:57.864 "product_name": "Malloc disk", 00:11:57.864 "block_size": 512, 00:11:57.864 "num_blocks": 65536, 00:11:57.864 "uuid": "736e301f-e061-42a6-b595-6f91c3735163", 00:11:57.864 "assigned_rate_limits": { 00:11:57.864 "rw_ios_per_sec": 0, 00:11:57.864 "rw_mbytes_per_sec": 0, 00:11:57.864 "r_mbytes_per_sec": 0, 00:11:57.864 "w_mbytes_per_sec": 0 00:11:57.864 }, 00:11:57.864 "claimed": true, 00:11:57.864 "claim_type": "exclusive_write", 00:11:57.864 "zoned": false, 00:11:57.864 "supported_io_types": { 00:11:57.864 "read": true, 00:11:57.864 "write": true, 00:11:57.864 "unmap": true, 00:11:57.864 "flush": true, 00:11:57.864 "reset": true, 00:11:57.864 "nvme_admin": false, 00:11:57.864 "nvme_io": false, 00:11:57.864 "nvme_io_md": false, 00:11:57.864 "write_zeroes": true, 00:11:57.864 "zcopy": true, 00:11:57.864 "get_zone_info": false, 00:11:57.864 "zone_management": false, 00:11:57.864 "zone_append": false, 00:11:57.864 "compare": false, 00:11:57.864 "compare_and_write": false, 
00:11:57.864 "abort": true, 00:11:57.864 "seek_hole": false, 00:11:57.864 "seek_data": false, 00:11:57.864 "copy": true, 00:11:57.864 "nvme_iov_md": false 00:11:57.864 }, 00:11:57.864 "memory_domains": [ 00:11:57.864 { 00:11:57.864 "dma_device_id": "system", 00:11:57.864 "dma_device_type": 1 00:11:57.864 }, 00:11:57.864 { 00:11:57.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.864 "dma_device_type": 2 00:11:57.864 } 00:11:57.864 ], 00:11:57.864 "driver_specific": {} 00:11:57.864 } 00:11:57.864 ] 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.864 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.864 "name": "Existed_Raid", 00:11:57.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.864 "strip_size_kb": 0, 00:11:57.864 "state": "configuring", 00:11:57.864 "raid_level": "raid1", 00:11:57.864 "superblock": false, 00:11:57.864 "num_base_bdevs": 4, 00:11:57.864 "num_base_bdevs_discovered": 3, 00:11:57.864 "num_base_bdevs_operational": 4, 00:11:57.864 "base_bdevs_list": [ 00:11:57.864 { 00:11:57.864 "name": "BaseBdev1", 00:11:57.864 "uuid": "e024bf5d-9353-4f0d-badb-65455ad7fc59", 00:11:57.864 "is_configured": true, 00:11:57.865 "data_offset": 0, 00:11:57.865 "data_size": 65536 00:11:57.865 }, 00:11:57.865 { 00:11:57.865 "name": "BaseBdev2", 00:11:57.865 "uuid": "2f1a6476-bfac-45e6-92cc-de5abbf3dae5", 00:11:57.865 "is_configured": true, 00:11:57.865 "data_offset": 0, 00:11:57.865 "data_size": 65536 00:11:57.865 }, 00:11:57.865 { 00:11:57.865 "name": "BaseBdev3", 00:11:57.865 "uuid": "736e301f-e061-42a6-b595-6f91c3735163", 00:11:57.865 "is_configured": true, 00:11:57.865 "data_offset": 0, 00:11:57.865 "data_size": 65536 00:11:57.865 }, 00:11:57.865 { 00:11:57.865 "name": "BaseBdev4", 00:11:57.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.865 "is_configured": false, 
00:11:57.865 "data_offset": 0, 00:11:57.865 "data_size": 0 00:11:57.865 } 00:11:57.865 ] 00:11:57.865 }' 00:11:57.865 17:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.865 17:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.432 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:58.432 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.432 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.432 [2024-11-26 17:56:40.119337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:58.432 [2024-11-26 17:56:40.119495] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:58.432 [2024-11-26 17:56:40.119511] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:58.432 [2024-11-26 17:56:40.119842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:58.432 [2024-11-26 17:56:40.120061] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:58.432 [2024-11-26 17:56:40.120082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:58.432 [2024-11-26 17:56:40.120391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.432 BaseBdev4 00:11:58.432 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.432 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:58.432 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:58.432 17:56:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.432 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:58.432 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.432 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.433 [ 00:11:58.433 { 00:11:58.433 "name": "BaseBdev4", 00:11:58.433 "aliases": [ 00:11:58.433 "5729b463-400b-4daf-b3ec-316ddccb6522" 00:11:58.433 ], 00:11:58.433 "product_name": "Malloc disk", 00:11:58.433 "block_size": 512, 00:11:58.433 "num_blocks": 65536, 00:11:58.433 "uuid": "5729b463-400b-4daf-b3ec-316ddccb6522", 00:11:58.433 "assigned_rate_limits": { 00:11:58.433 "rw_ios_per_sec": 0, 00:11:58.433 "rw_mbytes_per_sec": 0, 00:11:58.433 "r_mbytes_per_sec": 0, 00:11:58.433 "w_mbytes_per_sec": 0 00:11:58.433 }, 00:11:58.433 "claimed": true, 00:11:58.433 "claim_type": "exclusive_write", 00:11:58.433 "zoned": false, 00:11:58.433 "supported_io_types": { 00:11:58.433 "read": true, 00:11:58.433 "write": true, 00:11:58.433 "unmap": true, 00:11:58.433 "flush": true, 00:11:58.433 "reset": true, 00:11:58.433 
"nvme_admin": false, 00:11:58.433 "nvme_io": false, 00:11:58.433 "nvme_io_md": false, 00:11:58.433 "write_zeroes": true, 00:11:58.433 "zcopy": true, 00:11:58.433 "get_zone_info": false, 00:11:58.433 "zone_management": false, 00:11:58.433 "zone_append": false, 00:11:58.433 "compare": false, 00:11:58.433 "compare_and_write": false, 00:11:58.433 "abort": true, 00:11:58.433 "seek_hole": false, 00:11:58.433 "seek_data": false, 00:11:58.433 "copy": true, 00:11:58.433 "nvme_iov_md": false 00:11:58.433 }, 00:11:58.433 "memory_domains": [ 00:11:58.433 { 00:11:58.433 "dma_device_id": "system", 00:11:58.433 "dma_device_type": 1 00:11:58.433 }, 00:11:58.433 { 00:11:58.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.433 "dma_device_type": 2 00:11:58.433 } 00:11:58.433 ], 00:11:58.433 "driver_specific": {} 00:11:58.433 } 00:11:58.433 ] 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.433 17:56:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.433 "name": "Existed_Raid", 00:11:58.433 "uuid": "f0aa9e97-f263-4ba3-a01d-80dceb2138e8", 00:11:58.433 "strip_size_kb": 0, 00:11:58.433 "state": "online", 00:11:58.433 "raid_level": "raid1", 00:11:58.433 "superblock": false, 00:11:58.433 "num_base_bdevs": 4, 00:11:58.433 "num_base_bdevs_discovered": 4, 00:11:58.433 "num_base_bdevs_operational": 4, 00:11:58.433 "base_bdevs_list": [ 00:11:58.433 { 00:11:58.433 "name": "BaseBdev1", 00:11:58.433 "uuid": "e024bf5d-9353-4f0d-badb-65455ad7fc59", 00:11:58.433 "is_configured": true, 00:11:58.433 "data_offset": 0, 00:11:58.433 "data_size": 65536 00:11:58.433 }, 00:11:58.433 { 00:11:58.433 "name": "BaseBdev2", 00:11:58.433 "uuid": "2f1a6476-bfac-45e6-92cc-de5abbf3dae5", 00:11:58.433 "is_configured": true, 00:11:58.433 "data_offset": 0, 00:11:58.433 "data_size": 65536 00:11:58.433 }, 00:11:58.433 { 00:11:58.433 "name": "BaseBdev3", 00:11:58.433 "uuid": 
"736e301f-e061-42a6-b595-6f91c3735163", 00:11:58.433 "is_configured": true, 00:11:58.433 "data_offset": 0, 00:11:58.433 "data_size": 65536 00:11:58.433 }, 00:11:58.433 { 00:11:58.433 "name": "BaseBdev4", 00:11:58.433 "uuid": "5729b463-400b-4daf-b3ec-316ddccb6522", 00:11:58.433 "is_configured": true, 00:11:58.433 "data_offset": 0, 00:11:58.433 "data_size": 65536 00:11:58.433 } 00:11:58.433 ] 00:11:58.433 }' 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.433 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.001 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:59.001 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:59.001 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:59.001 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:59.001 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:59.001 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:59.001 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:59.001 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.001 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.001 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:59.001 [2024-11-26 17:56:40.615059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.001 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.001 17:56:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:59.001 "name": "Existed_Raid", 00:11:59.001 "aliases": [ 00:11:59.001 "f0aa9e97-f263-4ba3-a01d-80dceb2138e8" 00:11:59.001 ], 00:11:59.001 "product_name": "Raid Volume", 00:11:59.001 "block_size": 512, 00:11:59.001 "num_blocks": 65536, 00:11:59.002 "uuid": "f0aa9e97-f263-4ba3-a01d-80dceb2138e8", 00:11:59.002 "assigned_rate_limits": { 00:11:59.002 "rw_ios_per_sec": 0, 00:11:59.002 "rw_mbytes_per_sec": 0, 00:11:59.002 "r_mbytes_per_sec": 0, 00:11:59.002 "w_mbytes_per_sec": 0 00:11:59.002 }, 00:11:59.002 "claimed": false, 00:11:59.002 "zoned": false, 00:11:59.002 "supported_io_types": { 00:11:59.002 "read": true, 00:11:59.002 "write": true, 00:11:59.002 "unmap": false, 00:11:59.002 "flush": false, 00:11:59.002 "reset": true, 00:11:59.002 "nvme_admin": false, 00:11:59.002 "nvme_io": false, 00:11:59.002 "nvme_io_md": false, 00:11:59.002 "write_zeroes": true, 00:11:59.002 "zcopy": false, 00:11:59.002 "get_zone_info": false, 00:11:59.002 "zone_management": false, 00:11:59.002 "zone_append": false, 00:11:59.002 "compare": false, 00:11:59.002 "compare_and_write": false, 00:11:59.002 "abort": false, 00:11:59.002 "seek_hole": false, 00:11:59.002 "seek_data": false, 00:11:59.002 "copy": false, 00:11:59.002 "nvme_iov_md": false 00:11:59.002 }, 00:11:59.002 "memory_domains": [ 00:11:59.002 { 00:11:59.002 "dma_device_id": "system", 00:11:59.002 "dma_device_type": 1 00:11:59.002 }, 00:11:59.002 { 00:11:59.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.002 "dma_device_type": 2 00:11:59.002 }, 00:11:59.002 { 00:11:59.002 "dma_device_id": "system", 00:11:59.002 "dma_device_type": 1 00:11:59.002 }, 00:11:59.002 { 00:11:59.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.002 "dma_device_type": 2 00:11:59.002 }, 00:11:59.002 { 00:11:59.002 "dma_device_id": "system", 00:11:59.002 "dma_device_type": 1 00:11:59.002 }, 00:11:59.002 { 00:11:59.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:59.002 "dma_device_type": 2 00:11:59.002 }, 00:11:59.002 { 00:11:59.002 "dma_device_id": "system", 00:11:59.002 "dma_device_type": 1 00:11:59.002 }, 00:11:59.002 { 00:11:59.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.002 "dma_device_type": 2 00:11:59.002 } 00:11:59.002 ], 00:11:59.002 "driver_specific": { 00:11:59.002 "raid": { 00:11:59.002 "uuid": "f0aa9e97-f263-4ba3-a01d-80dceb2138e8", 00:11:59.002 "strip_size_kb": 0, 00:11:59.002 "state": "online", 00:11:59.002 "raid_level": "raid1", 00:11:59.002 "superblock": false, 00:11:59.002 "num_base_bdevs": 4, 00:11:59.002 "num_base_bdevs_discovered": 4, 00:11:59.002 "num_base_bdevs_operational": 4, 00:11:59.002 "base_bdevs_list": [ 00:11:59.002 { 00:11:59.002 "name": "BaseBdev1", 00:11:59.002 "uuid": "e024bf5d-9353-4f0d-badb-65455ad7fc59", 00:11:59.002 "is_configured": true, 00:11:59.002 "data_offset": 0, 00:11:59.002 "data_size": 65536 00:11:59.002 }, 00:11:59.002 { 00:11:59.002 "name": "BaseBdev2", 00:11:59.002 "uuid": "2f1a6476-bfac-45e6-92cc-de5abbf3dae5", 00:11:59.002 "is_configured": true, 00:11:59.002 "data_offset": 0, 00:11:59.002 "data_size": 65536 00:11:59.002 }, 00:11:59.002 { 00:11:59.002 "name": "BaseBdev3", 00:11:59.002 "uuid": "736e301f-e061-42a6-b595-6f91c3735163", 00:11:59.002 "is_configured": true, 00:11:59.002 "data_offset": 0, 00:11:59.002 "data_size": 65536 00:11:59.002 }, 00:11:59.002 { 00:11:59.002 "name": "BaseBdev4", 00:11:59.002 "uuid": "5729b463-400b-4daf-b3ec-316ddccb6522", 00:11:59.002 "is_configured": true, 00:11:59.002 "data_offset": 0, 00:11:59.002 "data_size": 65536 00:11:59.002 } 00:11:59.002 ] 00:11:59.002 } 00:11:59.002 } 00:11:59.002 }' 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:59.002 BaseBdev2 00:11:59.002 BaseBdev3 
00:11:59.002 BaseBdev4' 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.002 17:56:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.002 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.262 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.262 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.262 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.262 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:59.262 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.262 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.262 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.262 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.262 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.262 17:56:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.262 17:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:59.262 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.262 17:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.262 [2024-11-26 17:56:40.942248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.262 
17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.262 "name": "Existed_Raid", 00:11:59.262 "uuid": "f0aa9e97-f263-4ba3-a01d-80dceb2138e8", 00:11:59.262 "strip_size_kb": 0, 00:11:59.262 "state": "online", 00:11:59.262 "raid_level": "raid1", 00:11:59.262 "superblock": false, 00:11:59.262 "num_base_bdevs": 4, 00:11:59.262 "num_base_bdevs_discovered": 3, 00:11:59.262 "num_base_bdevs_operational": 3, 00:11:59.262 "base_bdevs_list": [ 00:11:59.262 { 00:11:59.262 "name": null, 00:11:59.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.262 "is_configured": false, 00:11:59.262 "data_offset": 0, 00:11:59.262 "data_size": 65536 00:11:59.262 }, 00:11:59.262 { 00:11:59.262 "name": "BaseBdev2", 00:11:59.262 "uuid": "2f1a6476-bfac-45e6-92cc-de5abbf3dae5", 00:11:59.262 "is_configured": true, 00:11:59.262 "data_offset": 0, 00:11:59.262 "data_size": 65536 00:11:59.262 }, 00:11:59.262 { 00:11:59.262 "name": "BaseBdev3", 00:11:59.262 "uuid": "736e301f-e061-42a6-b595-6f91c3735163", 00:11:59.262 "is_configured": true, 00:11:59.262 "data_offset": 0, 
00:11:59.262 "data_size": 65536 00:11:59.262 }, 00:11:59.262 { 00:11:59.262 "name": "BaseBdev4", 00:11:59.262 "uuid": "5729b463-400b-4daf-b3ec-316ddccb6522", 00:11:59.262 "is_configured": true, 00:11:59.262 "data_offset": 0, 00:11:59.262 "data_size": 65536 00:11:59.262 } 00:11:59.262 ] 00:11:59.262 }' 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.262 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.831 [2024-11-26 17:56:41.564512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.831 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.090 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.090 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:00.090 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:00.090 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:00.090 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.090 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.090 [2024-11-26 17:56:41.740931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:00.090 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.090 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:00.090 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.090 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.090 17:56:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:00.090 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.090 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.090 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.090 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:00.091 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:00.091 17:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:00.091 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.091 17:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.091 [2024-11-26 17:56:41.919313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:00.091 [2024-11-26 17:56:41.919429] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.350 [2024-11-26 17:56:42.035817] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.350 [2024-11-26 17:56:42.035886] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.350 [2024-11-26 17:56:42.035900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:00.350 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.350 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:00.350 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.350 17:56:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.350 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:00.350 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.350 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.350 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.350 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.351 BaseBdev2 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.351 [ 00:12:00.351 { 00:12:00.351 "name": "BaseBdev2", 00:12:00.351 "aliases": [ 00:12:00.351 "ef8636b6-f003-4dcc-9ea1-4c570827a8ea" 00:12:00.351 ], 00:12:00.351 "product_name": "Malloc disk", 00:12:00.351 "block_size": 512, 00:12:00.351 "num_blocks": 65536, 00:12:00.351 "uuid": "ef8636b6-f003-4dcc-9ea1-4c570827a8ea", 00:12:00.351 "assigned_rate_limits": { 00:12:00.351 "rw_ios_per_sec": 0, 00:12:00.351 "rw_mbytes_per_sec": 0, 00:12:00.351 "r_mbytes_per_sec": 0, 00:12:00.351 "w_mbytes_per_sec": 0 00:12:00.351 }, 00:12:00.351 "claimed": false, 00:12:00.351 "zoned": false, 00:12:00.351 "supported_io_types": { 00:12:00.351 "read": true, 00:12:00.351 "write": true, 00:12:00.351 "unmap": true, 00:12:00.351 "flush": true, 00:12:00.351 "reset": true, 00:12:00.351 "nvme_admin": false, 00:12:00.351 "nvme_io": false, 00:12:00.351 "nvme_io_md": false, 00:12:00.351 "write_zeroes": true, 00:12:00.351 "zcopy": true, 00:12:00.351 "get_zone_info": false, 00:12:00.351 "zone_management": false, 00:12:00.351 "zone_append": false, 
00:12:00.351 "compare": false, 00:12:00.351 "compare_and_write": false, 00:12:00.351 "abort": true, 00:12:00.351 "seek_hole": false, 00:12:00.351 "seek_data": false, 00:12:00.351 "copy": true, 00:12:00.351 "nvme_iov_md": false 00:12:00.351 }, 00:12:00.351 "memory_domains": [ 00:12:00.351 { 00:12:00.351 "dma_device_id": "system", 00:12:00.351 "dma_device_type": 1 00:12:00.351 }, 00:12:00.351 { 00:12:00.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.351 "dma_device_type": 2 00:12:00.351 } 00:12:00.351 ], 00:12:00.351 "driver_specific": {} 00:12:00.351 } 00:12:00.351 ] 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.351 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.611 BaseBdev3 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.611 [ 00:12:00.611 { 00:12:00.611 "name": "BaseBdev3", 00:12:00.611 "aliases": [ 00:12:00.611 "11708dfb-2395-457b-afc3-d64d71187fc4" 00:12:00.611 ], 00:12:00.611 "product_name": "Malloc disk", 00:12:00.611 "block_size": 512, 00:12:00.611 "num_blocks": 65536, 00:12:00.611 "uuid": "11708dfb-2395-457b-afc3-d64d71187fc4", 00:12:00.611 "assigned_rate_limits": { 00:12:00.611 "rw_ios_per_sec": 0, 00:12:00.611 "rw_mbytes_per_sec": 0, 00:12:00.611 "r_mbytes_per_sec": 0, 00:12:00.611 "w_mbytes_per_sec": 0 00:12:00.611 }, 00:12:00.611 "claimed": false, 00:12:00.611 "zoned": false, 00:12:00.611 "supported_io_types": { 00:12:00.611 "read": true, 00:12:00.611 "write": true, 00:12:00.611 "unmap": true, 00:12:00.611 "flush": true, 00:12:00.611 "reset": true, 00:12:00.611 "nvme_admin": false, 00:12:00.611 "nvme_io": false, 00:12:00.611 "nvme_io_md": false, 00:12:00.611 "write_zeroes": true, 00:12:00.611 "zcopy": true, 00:12:00.611 "get_zone_info": false, 00:12:00.611 "zone_management": false, 00:12:00.611 "zone_append": false, 
00:12:00.611 "compare": false, 00:12:00.611 "compare_and_write": false, 00:12:00.611 "abort": true, 00:12:00.611 "seek_hole": false, 00:12:00.611 "seek_data": false, 00:12:00.611 "copy": true, 00:12:00.611 "nvme_iov_md": false 00:12:00.611 }, 00:12:00.611 "memory_domains": [ 00:12:00.611 { 00:12:00.611 "dma_device_id": "system", 00:12:00.611 "dma_device_type": 1 00:12:00.611 }, 00:12:00.611 { 00:12:00.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.611 "dma_device_type": 2 00:12:00.611 } 00:12:00.611 ], 00:12:00.611 "driver_specific": {} 00:12:00.611 } 00:12:00.611 ] 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.611 BaseBdev4 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.611 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.611 [ 00:12:00.611 { 00:12:00.611 "name": "BaseBdev4", 00:12:00.611 "aliases": [ 00:12:00.611 "e8942d62-da45-4139-8d3f-479955954b75" 00:12:00.611 ], 00:12:00.611 "product_name": "Malloc disk", 00:12:00.611 "block_size": 512, 00:12:00.611 "num_blocks": 65536, 00:12:00.611 "uuid": "e8942d62-da45-4139-8d3f-479955954b75", 00:12:00.611 "assigned_rate_limits": { 00:12:00.611 "rw_ios_per_sec": 0, 00:12:00.611 "rw_mbytes_per_sec": 0, 00:12:00.611 "r_mbytes_per_sec": 0, 00:12:00.611 "w_mbytes_per_sec": 0 00:12:00.611 }, 00:12:00.611 "claimed": false, 00:12:00.611 "zoned": false, 00:12:00.611 "supported_io_types": { 00:12:00.611 "read": true, 00:12:00.611 "write": true, 00:12:00.611 "unmap": true, 00:12:00.611 "flush": true, 00:12:00.611 "reset": true, 00:12:00.611 "nvme_admin": false, 00:12:00.611 "nvme_io": false, 00:12:00.611 "nvme_io_md": false, 00:12:00.611 "write_zeroes": true, 00:12:00.611 "zcopy": true, 00:12:00.611 "get_zone_info": false, 00:12:00.611 "zone_management": false, 00:12:00.611 "zone_append": false, 
00:12:00.611 "compare": false, 00:12:00.611 "compare_and_write": false, 00:12:00.611 "abort": true, 00:12:00.611 "seek_hole": false, 00:12:00.611 "seek_data": false, 00:12:00.611 "copy": true, 00:12:00.611 "nvme_iov_md": false 00:12:00.612 }, 00:12:00.612 "memory_domains": [ 00:12:00.612 { 00:12:00.612 "dma_device_id": "system", 00:12:00.612 "dma_device_type": 1 00:12:00.612 }, 00:12:00.612 { 00:12:00.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.612 "dma_device_type": 2 00:12:00.612 } 00:12:00.612 ], 00:12:00.612 "driver_specific": {} 00:12:00.612 } 00:12:00.612 ] 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.612 [2024-11-26 17:56:42.326695] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:00.612 [2024-11-26 17:56:42.326812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:00.612 [2024-11-26 17:56:42.326872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.612 [2024-11-26 17:56:42.329209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.612 [2024-11-26 17:56:42.329350] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:00.612 "name": "Existed_Raid", 00:12:00.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.612 "strip_size_kb": 0, 00:12:00.612 "state": "configuring", 00:12:00.612 "raid_level": "raid1", 00:12:00.612 "superblock": false, 00:12:00.612 "num_base_bdevs": 4, 00:12:00.612 "num_base_bdevs_discovered": 3, 00:12:00.612 "num_base_bdevs_operational": 4, 00:12:00.612 "base_bdevs_list": [ 00:12:00.612 { 00:12:00.612 "name": "BaseBdev1", 00:12:00.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.612 "is_configured": false, 00:12:00.612 "data_offset": 0, 00:12:00.612 "data_size": 0 00:12:00.612 }, 00:12:00.612 { 00:12:00.612 "name": "BaseBdev2", 00:12:00.612 "uuid": "ef8636b6-f003-4dcc-9ea1-4c570827a8ea", 00:12:00.612 "is_configured": true, 00:12:00.612 "data_offset": 0, 00:12:00.612 "data_size": 65536 00:12:00.612 }, 00:12:00.612 { 00:12:00.612 "name": "BaseBdev3", 00:12:00.612 "uuid": "11708dfb-2395-457b-afc3-d64d71187fc4", 00:12:00.612 "is_configured": true, 00:12:00.612 "data_offset": 0, 00:12:00.612 "data_size": 65536 00:12:00.612 }, 00:12:00.612 { 00:12:00.612 "name": "BaseBdev4", 00:12:00.612 "uuid": "e8942d62-da45-4139-8d3f-479955954b75", 00:12:00.612 "is_configured": true, 00:12:00.612 "data_offset": 0, 00:12:00.612 "data_size": 65536 00:12:00.612 } 00:12:00.612 ] 00:12:00.612 }' 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.612 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.181 [2024-11-26 17:56:42.818211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.181 "name": "Existed_Raid", 00:12:01.181 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:01.181 "strip_size_kb": 0, 00:12:01.181 "state": "configuring", 00:12:01.181 "raid_level": "raid1", 00:12:01.181 "superblock": false, 00:12:01.181 "num_base_bdevs": 4, 00:12:01.181 "num_base_bdevs_discovered": 2, 00:12:01.181 "num_base_bdevs_operational": 4, 00:12:01.181 "base_bdevs_list": [ 00:12:01.181 { 00:12:01.181 "name": "BaseBdev1", 00:12:01.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.181 "is_configured": false, 00:12:01.181 "data_offset": 0, 00:12:01.181 "data_size": 0 00:12:01.181 }, 00:12:01.181 { 00:12:01.181 "name": null, 00:12:01.181 "uuid": "ef8636b6-f003-4dcc-9ea1-4c570827a8ea", 00:12:01.181 "is_configured": false, 00:12:01.181 "data_offset": 0, 00:12:01.181 "data_size": 65536 00:12:01.181 }, 00:12:01.181 { 00:12:01.181 "name": "BaseBdev3", 00:12:01.181 "uuid": "11708dfb-2395-457b-afc3-d64d71187fc4", 00:12:01.181 "is_configured": true, 00:12:01.181 "data_offset": 0, 00:12:01.181 "data_size": 65536 00:12:01.181 }, 00:12:01.181 { 00:12:01.181 "name": "BaseBdev4", 00:12:01.181 "uuid": "e8942d62-da45-4139-8d3f-479955954b75", 00:12:01.181 "is_configured": true, 00:12:01.181 "data_offset": 0, 00:12:01.181 "data_size": 65536 00:12:01.181 } 00:12:01.181 ] 00:12:01.181 }' 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.181 17:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.441 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.441 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.441 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:01.441 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.441 17:56:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.701 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:01.701 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:01.701 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.701 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.701 [2024-11-26 17:56:43.370199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.701 BaseBdev1 00:12:01.701 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.701 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:01.701 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:01.701 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.701 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:01.701 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.701 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.701 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.701 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.701 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.701 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.701 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:12:01.701 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.701 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.701 [ 00:12:01.701 { 00:12:01.701 "name": "BaseBdev1", 00:12:01.701 "aliases": [ 00:12:01.701 "f81ea94d-b929-4831-a4b9-2cde42efad0d" 00:12:01.701 ], 00:12:01.701 "product_name": "Malloc disk", 00:12:01.701 "block_size": 512, 00:12:01.701 "num_blocks": 65536, 00:12:01.702 "uuid": "f81ea94d-b929-4831-a4b9-2cde42efad0d", 00:12:01.702 "assigned_rate_limits": { 00:12:01.702 "rw_ios_per_sec": 0, 00:12:01.702 "rw_mbytes_per_sec": 0, 00:12:01.702 "r_mbytes_per_sec": 0, 00:12:01.702 "w_mbytes_per_sec": 0 00:12:01.702 }, 00:12:01.702 "claimed": true, 00:12:01.702 "claim_type": "exclusive_write", 00:12:01.702 "zoned": false, 00:12:01.702 "supported_io_types": { 00:12:01.702 "read": true, 00:12:01.702 "write": true, 00:12:01.702 "unmap": true, 00:12:01.702 "flush": true, 00:12:01.702 "reset": true, 00:12:01.702 "nvme_admin": false, 00:12:01.702 "nvme_io": false, 00:12:01.702 "nvme_io_md": false, 00:12:01.702 "write_zeroes": true, 00:12:01.702 "zcopy": true, 00:12:01.702 "get_zone_info": false, 00:12:01.702 "zone_management": false, 00:12:01.702 "zone_append": false, 00:12:01.702 "compare": false, 00:12:01.702 "compare_and_write": false, 00:12:01.702 "abort": true, 00:12:01.702 "seek_hole": false, 00:12:01.702 "seek_data": false, 00:12:01.702 "copy": true, 00:12:01.702 "nvme_iov_md": false 00:12:01.702 }, 00:12:01.702 "memory_domains": [ 00:12:01.702 { 00:12:01.702 "dma_device_id": "system", 00:12:01.702 "dma_device_type": 1 00:12:01.702 }, 00:12:01.702 { 00:12:01.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.702 "dma_device_type": 2 00:12:01.702 } 00:12:01.702 ], 00:12:01.702 "driver_specific": {} 00:12:01.702 } 00:12:01.702 ] 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.702 "name": "Existed_Raid", 00:12:01.702 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:01.702 "strip_size_kb": 0, 00:12:01.702 "state": "configuring", 00:12:01.702 "raid_level": "raid1", 00:12:01.702 "superblock": false, 00:12:01.702 "num_base_bdevs": 4, 00:12:01.702 "num_base_bdevs_discovered": 3, 00:12:01.702 "num_base_bdevs_operational": 4, 00:12:01.702 "base_bdevs_list": [ 00:12:01.702 { 00:12:01.702 "name": "BaseBdev1", 00:12:01.702 "uuid": "f81ea94d-b929-4831-a4b9-2cde42efad0d", 00:12:01.702 "is_configured": true, 00:12:01.702 "data_offset": 0, 00:12:01.702 "data_size": 65536 00:12:01.702 }, 00:12:01.702 { 00:12:01.702 "name": null, 00:12:01.702 "uuid": "ef8636b6-f003-4dcc-9ea1-4c570827a8ea", 00:12:01.702 "is_configured": false, 00:12:01.702 "data_offset": 0, 00:12:01.702 "data_size": 65536 00:12:01.702 }, 00:12:01.702 { 00:12:01.702 "name": "BaseBdev3", 00:12:01.702 "uuid": "11708dfb-2395-457b-afc3-d64d71187fc4", 00:12:01.702 "is_configured": true, 00:12:01.702 "data_offset": 0, 00:12:01.702 "data_size": 65536 00:12:01.702 }, 00:12:01.702 { 00:12:01.702 "name": "BaseBdev4", 00:12:01.702 "uuid": "e8942d62-da45-4139-8d3f-479955954b75", 00:12:01.702 "is_configured": true, 00:12:01.702 "data_offset": 0, 00:12:01.702 "data_size": 65536 00:12:01.702 } 00:12:01.702 ] 00:12:01.702 }' 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.702 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.271 [2024-11-26 17:56:43.913656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.271 "name": "Existed_Raid", 00:12:02.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.271 "strip_size_kb": 0, 00:12:02.271 "state": "configuring", 00:12:02.271 "raid_level": "raid1", 00:12:02.271 "superblock": false, 00:12:02.271 "num_base_bdevs": 4, 00:12:02.271 "num_base_bdevs_discovered": 2, 00:12:02.271 "num_base_bdevs_operational": 4, 00:12:02.271 "base_bdevs_list": [ 00:12:02.271 { 00:12:02.271 "name": "BaseBdev1", 00:12:02.271 "uuid": "f81ea94d-b929-4831-a4b9-2cde42efad0d", 00:12:02.271 "is_configured": true, 00:12:02.271 "data_offset": 0, 00:12:02.271 "data_size": 65536 00:12:02.271 }, 00:12:02.271 { 00:12:02.271 "name": null, 00:12:02.271 "uuid": "ef8636b6-f003-4dcc-9ea1-4c570827a8ea", 00:12:02.271 "is_configured": false, 00:12:02.271 "data_offset": 0, 00:12:02.271 "data_size": 65536 00:12:02.271 }, 00:12:02.271 { 00:12:02.271 "name": null, 00:12:02.271 "uuid": "11708dfb-2395-457b-afc3-d64d71187fc4", 00:12:02.271 "is_configured": false, 00:12:02.271 "data_offset": 0, 00:12:02.271 "data_size": 65536 00:12:02.271 }, 00:12:02.271 { 00:12:02.271 "name": "BaseBdev4", 00:12:02.271 "uuid": "e8942d62-da45-4139-8d3f-479955954b75", 00:12:02.271 "is_configured": true, 00:12:02.271 "data_offset": 0, 00:12:02.271 "data_size": 65536 00:12:02.271 } 00:12:02.271 ] 00:12:02.271 }' 00:12:02.271 17:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.271 17:56:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.552 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.552 17:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.552 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:02.552 17:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.831 17:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.831 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:02.831 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:02.831 17:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.831 17:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.831 [2024-11-26 17:56:44.444777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:02.831 17:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.831 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.831 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.831 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.831 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.831 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.831 17:56:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.831 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.831 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.831 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.832 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.832 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.832 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.832 17:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.832 17:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.832 17:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.832 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.832 "name": "Existed_Raid", 00:12:02.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.832 "strip_size_kb": 0, 00:12:02.832 "state": "configuring", 00:12:02.832 "raid_level": "raid1", 00:12:02.832 "superblock": false, 00:12:02.832 "num_base_bdevs": 4, 00:12:02.832 "num_base_bdevs_discovered": 3, 00:12:02.832 "num_base_bdevs_operational": 4, 00:12:02.832 "base_bdevs_list": [ 00:12:02.832 { 00:12:02.832 "name": "BaseBdev1", 00:12:02.832 "uuid": "f81ea94d-b929-4831-a4b9-2cde42efad0d", 00:12:02.832 "is_configured": true, 00:12:02.832 "data_offset": 0, 00:12:02.832 "data_size": 65536 00:12:02.832 }, 00:12:02.832 { 00:12:02.832 "name": null, 00:12:02.832 "uuid": "ef8636b6-f003-4dcc-9ea1-4c570827a8ea", 00:12:02.832 "is_configured": false, 00:12:02.832 "data_offset": 
0, 00:12:02.832 "data_size": 65536 00:12:02.832 }, 00:12:02.832 { 00:12:02.832 "name": "BaseBdev3", 00:12:02.832 "uuid": "11708dfb-2395-457b-afc3-d64d71187fc4", 00:12:02.832 "is_configured": true, 00:12:02.832 "data_offset": 0, 00:12:02.832 "data_size": 65536 00:12:02.832 }, 00:12:02.832 { 00:12:02.832 "name": "BaseBdev4", 00:12:02.832 "uuid": "e8942d62-da45-4139-8d3f-479955954b75", 00:12:02.832 "is_configured": true, 00:12:02.832 "data_offset": 0, 00:12:02.832 "data_size": 65536 00:12:02.832 } 00:12:02.832 ] 00:12:02.832 }' 00:12:02.832 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.832 17:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.091 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.091 17:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.091 17:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.091 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:03.091 17:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.350 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:03.350 17:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:03.350 17:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.350 17:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.350 [2024-11-26 17:56:44.971996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:03.350 17:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.350 17:56:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:03.350 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.350 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.350 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.350 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.350 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.350 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.350 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.350 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.350 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.350 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.350 17:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.350 17:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.350 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.350 17:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.350 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.350 "name": "Existed_Raid", 00:12:03.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.350 "strip_size_kb": 0, 00:12:03.350 "state": "configuring", 00:12:03.350 
"raid_level": "raid1", 00:12:03.350 "superblock": false, 00:12:03.350 "num_base_bdevs": 4, 00:12:03.350 "num_base_bdevs_discovered": 2, 00:12:03.350 "num_base_bdevs_operational": 4, 00:12:03.350 "base_bdevs_list": [ 00:12:03.350 { 00:12:03.350 "name": null, 00:12:03.350 "uuid": "f81ea94d-b929-4831-a4b9-2cde42efad0d", 00:12:03.350 "is_configured": false, 00:12:03.350 "data_offset": 0, 00:12:03.350 "data_size": 65536 00:12:03.350 }, 00:12:03.350 { 00:12:03.350 "name": null, 00:12:03.350 "uuid": "ef8636b6-f003-4dcc-9ea1-4c570827a8ea", 00:12:03.350 "is_configured": false, 00:12:03.350 "data_offset": 0, 00:12:03.350 "data_size": 65536 00:12:03.350 }, 00:12:03.350 { 00:12:03.350 "name": "BaseBdev3", 00:12:03.350 "uuid": "11708dfb-2395-457b-afc3-d64d71187fc4", 00:12:03.350 "is_configured": true, 00:12:03.350 "data_offset": 0, 00:12:03.350 "data_size": 65536 00:12:03.350 }, 00:12:03.350 { 00:12:03.350 "name": "BaseBdev4", 00:12:03.350 "uuid": "e8942d62-da45-4139-8d3f-479955954b75", 00:12:03.350 "is_configured": true, 00:12:03.350 "data_offset": 0, 00:12:03.350 "data_size": 65536 00:12:03.350 } 00:12:03.350 ] 00:12:03.350 }' 00:12:03.350 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.350 17:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.918 [2024-11-26 17:56:45.648175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.918 "name": "Existed_Raid", 00:12:03.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.918 "strip_size_kb": 0, 00:12:03.918 "state": "configuring", 00:12:03.918 "raid_level": "raid1", 00:12:03.918 "superblock": false, 00:12:03.918 "num_base_bdevs": 4, 00:12:03.918 "num_base_bdevs_discovered": 3, 00:12:03.918 "num_base_bdevs_operational": 4, 00:12:03.918 "base_bdevs_list": [ 00:12:03.918 { 00:12:03.918 "name": null, 00:12:03.918 "uuid": "f81ea94d-b929-4831-a4b9-2cde42efad0d", 00:12:03.918 "is_configured": false, 00:12:03.918 "data_offset": 0, 00:12:03.918 "data_size": 65536 00:12:03.918 }, 00:12:03.918 { 00:12:03.918 "name": "BaseBdev2", 00:12:03.918 "uuid": "ef8636b6-f003-4dcc-9ea1-4c570827a8ea", 00:12:03.918 "is_configured": true, 00:12:03.918 "data_offset": 0, 00:12:03.918 "data_size": 65536 00:12:03.918 }, 00:12:03.918 { 00:12:03.918 "name": "BaseBdev3", 00:12:03.918 "uuid": "11708dfb-2395-457b-afc3-d64d71187fc4", 00:12:03.918 "is_configured": true, 00:12:03.918 "data_offset": 0, 00:12:03.918 "data_size": 65536 00:12:03.918 }, 00:12:03.918 { 00:12:03.918 "name": "BaseBdev4", 00:12:03.918 "uuid": "e8942d62-da45-4139-8d3f-479955954b75", 00:12:03.918 "is_configured": true, 00:12:03.918 "data_offset": 0, 00:12:03.918 "data_size": 65536 00:12:03.918 } 00:12:03.918 ] 00:12:03.918 }' 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.918 17:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.488 17:56:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f81ea94d-b929-4831-a4b9-2cde42efad0d 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.488 [2024-11-26 17:56:46.235621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:04.488 [2024-11-26 17:56:46.235693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:04.488 [2024-11-26 17:56:46.235705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:04.488 
[2024-11-26 17:56:46.236052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:04.488 [2024-11-26 17:56:46.236238] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:04.488 [2024-11-26 17:56:46.236250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:04.488 [2024-11-26 17:56:46.236564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.488 NewBaseBdev 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.488 [ 00:12:04.488 { 00:12:04.488 "name": "NewBaseBdev", 00:12:04.488 "aliases": [ 00:12:04.488 "f81ea94d-b929-4831-a4b9-2cde42efad0d" 00:12:04.488 ], 00:12:04.488 "product_name": "Malloc disk", 00:12:04.488 "block_size": 512, 00:12:04.488 "num_blocks": 65536, 00:12:04.488 "uuid": "f81ea94d-b929-4831-a4b9-2cde42efad0d", 00:12:04.488 "assigned_rate_limits": { 00:12:04.488 "rw_ios_per_sec": 0, 00:12:04.488 "rw_mbytes_per_sec": 0, 00:12:04.488 "r_mbytes_per_sec": 0, 00:12:04.488 "w_mbytes_per_sec": 0 00:12:04.488 }, 00:12:04.488 "claimed": true, 00:12:04.488 "claim_type": "exclusive_write", 00:12:04.488 "zoned": false, 00:12:04.488 "supported_io_types": { 00:12:04.488 "read": true, 00:12:04.488 "write": true, 00:12:04.488 "unmap": true, 00:12:04.488 "flush": true, 00:12:04.488 "reset": true, 00:12:04.488 "nvme_admin": false, 00:12:04.488 "nvme_io": false, 00:12:04.488 "nvme_io_md": false, 00:12:04.488 "write_zeroes": true, 00:12:04.488 "zcopy": true, 00:12:04.488 "get_zone_info": false, 00:12:04.488 "zone_management": false, 00:12:04.488 "zone_append": false, 00:12:04.488 "compare": false, 00:12:04.488 "compare_and_write": false, 00:12:04.488 "abort": true, 00:12:04.488 "seek_hole": false, 00:12:04.488 "seek_data": false, 00:12:04.488 "copy": true, 00:12:04.488 "nvme_iov_md": false 00:12:04.488 }, 00:12:04.488 "memory_domains": [ 00:12:04.488 { 00:12:04.488 "dma_device_id": "system", 00:12:04.488 "dma_device_type": 1 00:12:04.488 }, 00:12:04.488 { 00:12:04.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.488 "dma_device_type": 2 00:12:04.488 } 00:12:04.488 ], 00:12:04.488 "driver_specific": {} 00:12:04.488 } 00:12:04.488 ] 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.488 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.488 "name": "Existed_Raid", 00:12:04.488 "uuid": "8c204892-9ffe-4f1e-9e96-2a440f51b39f", 00:12:04.488 "strip_size_kb": 0, 00:12:04.488 "state": "online", 00:12:04.488 
"raid_level": "raid1", 00:12:04.488 "superblock": false, 00:12:04.488 "num_base_bdevs": 4, 00:12:04.488 "num_base_bdevs_discovered": 4, 00:12:04.488 "num_base_bdevs_operational": 4, 00:12:04.488 "base_bdevs_list": [ 00:12:04.488 { 00:12:04.488 "name": "NewBaseBdev", 00:12:04.488 "uuid": "f81ea94d-b929-4831-a4b9-2cde42efad0d", 00:12:04.488 "is_configured": true, 00:12:04.488 "data_offset": 0, 00:12:04.488 "data_size": 65536 00:12:04.488 }, 00:12:04.488 { 00:12:04.488 "name": "BaseBdev2", 00:12:04.488 "uuid": "ef8636b6-f003-4dcc-9ea1-4c570827a8ea", 00:12:04.488 "is_configured": true, 00:12:04.488 "data_offset": 0, 00:12:04.488 "data_size": 65536 00:12:04.488 }, 00:12:04.488 { 00:12:04.488 "name": "BaseBdev3", 00:12:04.488 "uuid": "11708dfb-2395-457b-afc3-d64d71187fc4", 00:12:04.488 "is_configured": true, 00:12:04.488 "data_offset": 0, 00:12:04.488 "data_size": 65536 00:12:04.488 }, 00:12:04.488 { 00:12:04.488 "name": "BaseBdev4", 00:12:04.488 "uuid": "e8942d62-da45-4139-8d3f-479955954b75", 00:12:04.488 "is_configured": true, 00:12:04.488 "data_offset": 0, 00:12:04.488 "data_size": 65536 00:12:04.488 } 00:12:04.488 ] 00:12:04.488 }' 00:12:04.489 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.489 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.056 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:05.056 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:05.056 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.056 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.056 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.056 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:05.056 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.056 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:05.056 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.056 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.056 [2024-11-26 17:56:46.751382] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.056 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.056 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.056 "name": "Existed_Raid", 00:12:05.056 "aliases": [ 00:12:05.056 "8c204892-9ffe-4f1e-9e96-2a440f51b39f" 00:12:05.056 ], 00:12:05.056 "product_name": "Raid Volume", 00:12:05.056 "block_size": 512, 00:12:05.056 "num_blocks": 65536, 00:12:05.056 "uuid": "8c204892-9ffe-4f1e-9e96-2a440f51b39f", 00:12:05.056 "assigned_rate_limits": { 00:12:05.056 "rw_ios_per_sec": 0, 00:12:05.056 "rw_mbytes_per_sec": 0, 00:12:05.056 "r_mbytes_per_sec": 0, 00:12:05.056 "w_mbytes_per_sec": 0 00:12:05.056 }, 00:12:05.056 "claimed": false, 00:12:05.056 "zoned": false, 00:12:05.056 "supported_io_types": { 00:12:05.056 "read": true, 00:12:05.056 "write": true, 00:12:05.056 "unmap": false, 00:12:05.056 "flush": false, 00:12:05.056 "reset": true, 00:12:05.056 "nvme_admin": false, 00:12:05.056 "nvme_io": false, 00:12:05.056 "nvme_io_md": false, 00:12:05.056 "write_zeroes": true, 00:12:05.056 "zcopy": false, 00:12:05.056 "get_zone_info": false, 00:12:05.056 "zone_management": false, 00:12:05.056 "zone_append": false, 00:12:05.056 "compare": false, 00:12:05.056 "compare_and_write": false, 00:12:05.056 "abort": false, 00:12:05.056 "seek_hole": false, 00:12:05.056 "seek_data": false, 00:12:05.056 
"copy": false, 00:12:05.056 "nvme_iov_md": false 00:12:05.056 }, 00:12:05.056 "memory_domains": [ 00:12:05.056 { 00:12:05.056 "dma_device_id": "system", 00:12:05.056 "dma_device_type": 1 00:12:05.056 }, 00:12:05.056 { 00:12:05.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.056 "dma_device_type": 2 00:12:05.056 }, 00:12:05.056 { 00:12:05.056 "dma_device_id": "system", 00:12:05.056 "dma_device_type": 1 00:12:05.056 }, 00:12:05.057 { 00:12:05.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.057 "dma_device_type": 2 00:12:05.057 }, 00:12:05.057 { 00:12:05.057 "dma_device_id": "system", 00:12:05.057 "dma_device_type": 1 00:12:05.057 }, 00:12:05.057 { 00:12:05.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.057 "dma_device_type": 2 00:12:05.057 }, 00:12:05.057 { 00:12:05.057 "dma_device_id": "system", 00:12:05.057 "dma_device_type": 1 00:12:05.057 }, 00:12:05.057 { 00:12:05.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.057 "dma_device_type": 2 00:12:05.057 } 00:12:05.057 ], 00:12:05.057 "driver_specific": { 00:12:05.057 "raid": { 00:12:05.057 "uuid": "8c204892-9ffe-4f1e-9e96-2a440f51b39f", 00:12:05.057 "strip_size_kb": 0, 00:12:05.057 "state": "online", 00:12:05.057 "raid_level": "raid1", 00:12:05.057 "superblock": false, 00:12:05.057 "num_base_bdevs": 4, 00:12:05.057 "num_base_bdevs_discovered": 4, 00:12:05.057 "num_base_bdevs_operational": 4, 00:12:05.057 "base_bdevs_list": [ 00:12:05.057 { 00:12:05.057 "name": "NewBaseBdev", 00:12:05.057 "uuid": "f81ea94d-b929-4831-a4b9-2cde42efad0d", 00:12:05.057 "is_configured": true, 00:12:05.057 "data_offset": 0, 00:12:05.057 "data_size": 65536 00:12:05.057 }, 00:12:05.057 { 00:12:05.057 "name": "BaseBdev2", 00:12:05.057 "uuid": "ef8636b6-f003-4dcc-9ea1-4c570827a8ea", 00:12:05.057 "is_configured": true, 00:12:05.057 "data_offset": 0, 00:12:05.057 "data_size": 65536 00:12:05.057 }, 00:12:05.057 { 00:12:05.057 "name": "BaseBdev3", 00:12:05.057 "uuid": "11708dfb-2395-457b-afc3-d64d71187fc4", 00:12:05.057 
"is_configured": true, 00:12:05.057 "data_offset": 0, 00:12:05.057 "data_size": 65536 00:12:05.057 }, 00:12:05.057 { 00:12:05.057 "name": "BaseBdev4", 00:12:05.057 "uuid": "e8942d62-da45-4139-8d3f-479955954b75", 00:12:05.057 "is_configured": true, 00:12:05.057 "data_offset": 0, 00:12:05.057 "data_size": 65536 00:12:05.057 } 00:12:05.057 ] 00:12:05.057 } 00:12:05.057 } 00:12:05.057 }' 00:12:05.057 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.057 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:05.057 BaseBdev2 00:12:05.057 BaseBdev3 00:12:05.057 BaseBdev4' 00:12:05.057 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.057 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.057 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.057 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:05.057 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.057 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.057 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.057 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.317 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.317 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.317 17:56:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.317 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:05.317 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.317 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.317 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.317 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.317 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.317 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.317 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.317 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:05.317 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.317 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.317 17:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.317 17:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.317 17:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.317 17:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.317 17:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.317 17:56:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:05.317 17:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.317 17:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.317 17:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.318 17:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.318 17:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.318 17:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.318 17:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:05.318 17:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.318 17:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.318 [2024-11-26 17:56:47.082369] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:05.318 [2024-11-26 17:56:47.082411] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.318 [2024-11-26 17:56:47.082524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.318 [2024-11-26 17:56:47.082872] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.318 [2024-11-26 17:56:47.082888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:05.318 17:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.318 17:56:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73479 00:12:05.318 17:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73479 ']' 00:12:05.318 17:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73479 00:12:05.318 17:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:05.318 17:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.318 17:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73479 00:12:05.318 17:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.318 17:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.318 17:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73479' 00:12:05.318 killing process with pid 73479 00:12:05.318 17:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73479 00:12:05.318 [2024-11-26 17:56:47.126027] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:05.318 17:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73479 00:12:05.887 [2024-11-26 17:56:47.614521] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:07.264 17:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:07.264 ************************************ 00:12:07.264 END TEST raid_state_function_test 00:12:07.264 ************************************ 00:12:07.264 00:12:07.264 real 0m12.520s 00:12:07.264 user 0m19.740s 00:12:07.264 sys 0m2.037s 00:12:07.264 17:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.264 17:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:07.264 17:56:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:07.264 17:56:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:07.264 17:56:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.264 17:56:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:07.264 ************************************ 00:12:07.264 START TEST raid_state_function_test_sb 00:12:07.264 ************************************ 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.264 
17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:07.264 Process raid pid: 74163 00:12:07.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74163 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74163' 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74163 00:12:07.264 17:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74163 ']' 00:12:07.265 17:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.265 17:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.265 17:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.265 17:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.265 17:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:07.265 17:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.530 [2024-11-26 17:56:49.152708] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:12:07.530 [2024-11-26 17:56:49.153469] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.530 [2024-11-26 17:56:49.334145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.792 [2024-11-26 17:56:49.475843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.051 [2024-11-26 17:56:49.726863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.051 [2024-11-26 17:56:49.727053] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.310 [2024-11-26 17:56:50.065807] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:08.310 [2024-11-26 17:56:50.065922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:08.310 [2024-11-26 17:56:50.065963] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:08.310 [2024-11-26 17:56:50.066000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:08.310 [2024-11-26 17:56:50.066047] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:08.310 [2024-11-26 17:56:50.066086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:08.310 [2024-11-26 17:56:50.066118] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:08.310 [2024-11-26 17:56:50.066153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.310 17:56:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.310 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.310 "name": "Existed_Raid", 00:12:08.310 "uuid": "b0360563-5d35-40c2-a02b-ca7f38a79156", 00:12:08.310 "strip_size_kb": 0, 00:12:08.310 "state": "configuring", 00:12:08.310 "raid_level": "raid1", 00:12:08.310 "superblock": true, 00:12:08.310 "num_base_bdevs": 4, 00:12:08.310 "num_base_bdevs_discovered": 0, 00:12:08.310 "num_base_bdevs_operational": 4, 00:12:08.310 "base_bdevs_list": [ 00:12:08.310 { 00:12:08.310 "name": "BaseBdev1", 00:12:08.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.311 "is_configured": false, 00:12:08.311 "data_offset": 0, 00:12:08.311 "data_size": 0 00:12:08.311 }, 00:12:08.311 { 00:12:08.311 "name": "BaseBdev2", 00:12:08.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.311 "is_configured": false, 00:12:08.311 "data_offset": 0, 00:12:08.311 "data_size": 0 00:12:08.311 }, 00:12:08.311 { 00:12:08.311 "name": "BaseBdev3", 00:12:08.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.311 "is_configured": false, 00:12:08.311 "data_offset": 0, 00:12:08.311 "data_size": 0 00:12:08.311 }, 00:12:08.311 { 00:12:08.311 "name": "BaseBdev4", 00:12:08.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.311 "is_configured": false, 00:12:08.311 "data_offset": 0, 00:12:08.311 "data_size": 0 00:12:08.311 } 00:12:08.311 ] 00:12:08.311 }' 00:12:08.311 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.311 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.890 17:56:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.891 [2024-11-26 17:56:50.497239] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:08.891 [2024-11-26 17:56:50.497293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.891 [2024-11-26 17:56:50.509243] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:08.891 [2024-11-26 17:56:50.509306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:08.891 [2024-11-26 17:56:50.509317] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:08.891 [2024-11-26 17:56:50.509329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:08.891 [2024-11-26 17:56:50.509337] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:08.891 [2024-11-26 17:56:50.509348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:08.891 [2024-11-26 17:56:50.509355] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:08.891 [2024-11-26 17:56:50.509366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.891 [2024-11-26 17:56:50.565962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.891 BaseBdev1 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.891 [ 00:12:08.891 { 00:12:08.891 "name": "BaseBdev1", 00:12:08.891 "aliases": [ 00:12:08.891 "490a9da2-a6fc-4fa1-aaa2-f1747f58aed5" 00:12:08.891 ], 00:12:08.891 "product_name": "Malloc disk", 00:12:08.891 "block_size": 512, 00:12:08.891 "num_blocks": 65536, 00:12:08.891 "uuid": "490a9da2-a6fc-4fa1-aaa2-f1747f58aed5", 00:12:08.891 "assigned_rate_limits": { 00:12:08.891 "rw_ios_per_sec": 0, 00:12:08.891 "rw_mbytes_per_sec": 0, 00:12:08.891 "r_mbytes_per_sec": 0, 00:12:08.891 "w_mbytes_per_sec": 0 00:12:08.891 }, 00:12:08.891 "claimed": true, 00:12:08.891 "claim_type": "exclusive_write", 00:12:08.891 "zoned": false, 00:12:08.891 "supported_io_types": { 00:12:08.891 "read": true, 00:12:08.891 "write": true, 00:12:08.891 "unmap": true, 00:12:08.891 "flush": true, 00:12:08.891 "reset": true, 00:12:08.891 "nvme_admin": false, 00:12:08.891 "nvme_io": false, 00:12:08.891 "nvme_io_md": false, 00:12:08.891 "write_zeroes": true, 00:12:08.891 "zcopy": true, 00:12:08.891 "get_zone_info": false, 00:12:08.891 "zone_management": false, 00:12:08.891 "zone_append": false, 00:12:08.891 "compare": false, 00:12:08.891 "compare_and_write": false, 00:12:08.891 "abort": true, 00:12:08.891 "seek_hole": false, 00:12:08.891 "seek_data": false, 00:12:08.891 "copy": true, 00:12:08.891 "nvme_iov_md": false 00:12:08.891 }, 00:12:08.891 "memory_domains": [ 00:12:08.891 { 00:12:08.891 "dma_device_id": "system", 00:12:08.891 "dma_device_type": 1 00:12:08.891 }, 00:12:08.891 { 00:12:08.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.891 "dma_device_type": 2 00:12:08.891 } 00:12:08.891 ], 00:12:08.891 "driver_specific": {} 
00:12:08.891 } 00:12:08.891 ] 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.891 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.891 "name": "Existed_Raid", 00:12:08.891 "uuid": "3b77bd2f-c641-437c-975d-ad85f313add0", 00:12:08.891 "strip_size_kb": 0, 00:12:08.891 "state": "configuring", 00:12:08.891 "raid_level": "raid1", 00:12:08.891 "superblock": true, 00:12:08.891 "num_base_bdevs": 4, 00:12:08.891 "num_base_bdevs_discovered": 1, 00:12:08.891 "num_base_bdevs_operational": 4, 00:12:08.891 "base_bdevs_list": [ 00:12:08.891 { 00:12:08.891 "name": "BaseBdev1", 00:12:08.891 "uuid": "490a9da2-a6fc-4fa1-aaa2-f1747f58aed5", 00:12:08.891 "is_configured": true, 00:12:08.891 "data_offset": 2048, 00:12:08.891 "data_size": 63488 00:12:08.891 }, 00:12:08.891 { 00:12:08.891 "name": "BaseBdev2", 00:12:08.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.891 "is_configured": false, 00:12:08.891 "data_offset": 0, 00:12:08.891 "data_size": 0 00:12:08.891 }, 00:12:08.892 { 00:12:08.892 "name": "BaseBdev3", 00:12:08.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.892 "is_configured": false, 00:12:08.892 "data_offset": 0, 00:12:08.892 "data_size": 0 00:12:08.892 }, 00:12:08.892 { 00:12:08.892 "name": "BaseBdev4", 00:12:08.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.892 "is_configured": false, 00:12:08.892 "data_offset": 0, 00:12:08.892 "data_size": 0 00:12:08.892 } 00:12:08.892 ] 00:12:08.892 }' 00:12:08.892 17:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.892 17:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:09.457 [2024-11-26 17:56:51.097203] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:09.457 [2024-11-26 17:56:51.097273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.457 [2024-11-26 17:56:51.105274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.457 [2024-11-26 17:56:51.107447] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:09.457 [2024-11-26 17:56:51.107531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:09.457 [2024-11-26 17:56:51.107570] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:09.457 [2024-11-26 17:56:51.107614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:09.457 [2024-11-26 17:56:51.107646] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:09.457 [2024-11-26 17:56:51.107673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:09.457 17:56:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.457 "name": 
"Existed_Raid", 00:12:09.457 "uuid": "d97940b4-d836-4a0a-b69e-8765e9512473", 00:12:09.457 "strip_size_kb": 0, 00:12:09.457 "state": "configuring", 00:12:09.457 "raid_level": "raid1", 00:12:09.457 "superblock": true, 00:12:09.457 "num_base_bdevs": 4, 00:12:09.457 "num_base_bdevs_discovered": 1, 00:12:09.457 "num_base_bdevs_operational": 4, 00:12:09.457 "base_bdevs_list": [ 00:12:09.457 { 00:12:09.457 "name": "BaseBdev1", 00:12:09.457 "uuid": "490a9da2-a6fc-4fa1-aaa2-f1747f58aed5", 00:12:09.457 "is_configured": true, 00:12:09.457 "data_offset": 2048, 00:12:09.457 "data_size": 63488 00:12:09.457 }, 00:12:09.457 { 00:12:09.457 "name": "BaseBdev2", 00:12:09.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.457 "is_configured": false, 00:12:09.457 "data_offset": 0, 00:12:09.457 "data_size": 0 00:12:09.457 }, 00:12:09.457 { 00:12:09.457 "name": "BaseBdev3", 00:12:09.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.457 "is_configured": false, 00:12:09.457 "data_offset": 0, 00:12:09.457 "data_size": 0 00:12:09.457 }, 00:12:09.457 { 00:12:09.457 "name": "BaseBdev4", 00:12:09.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.457 "is_configured": false, 00:12:09.457 "data_offset": 0, 00:12:09.457 "data_size": 0 00:12:09.457 } 00:12:09.457 ] 00:12:09.457 }' 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.457 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.716 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:09.716 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.716 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.974 [2024-11-26 17:56:51.600626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.974 
BaseBdev2 00:12:09.974 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.974 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:09.974 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:09.974 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:09.974 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:09.974 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:09.974 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:09.974 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:09.974 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.974 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.974 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.974 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:09.974 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.974 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.974 [ 00:12:09.974 { 00:12:09.974 "name": "BaseBdev2", 00:12:09.974 "aliases": [ 00:12:09.974 "36455f22-26b3-4814-9722-3833499054ed" 00:12:09.974 ], 00:12:09.974 "product_name": "Malloc disk", 00:12:09.974 "block_size": 512, 00:12:09.974 "num_blocks": 65536, 00:12:09.974 "uuid": "36455f22-26b3-4814-9722-3833499054ed", 00:12:09.974 "assigned_rate_limits": { 
00:12:09.974 "rw_ios_per_sec": 0, 00:12:09.974 "rw_mbytes_per_sec": 0, 00:12:09.974 "r_mbytes_per_sec": 0, 00:12:09.974 "w_mbytes_per_sec": 0 00:12:09.974 }, 00:12:09.974 "claimed": true, 00:12:09.974 "claim_type": "exclusive_write", 00:12:09.974 "zoned": false, 00:12:09.974 "supported_io_types": { 00:12:09.974 "read": true, 00:12:09.974 "write": true, 00:12:09.974 "unmap": true, 00:12:09.974 "flush": true, 00:12:09.974 "reset": true, 00:12:09.974 "nvme_admin": false, 00:12:09.974 "nvme_io": false, 00:12:09.974 "nvme_io_md": false, 00:12:09.974 "write_zeroes": true, 00:12:09.974 "zcopy": true, 00:12:09.974 "get_zone_info": false, 00:12:09.974 "zone_management": false, 00:12:09.974 "zone_append": false, 00:12:09.974 "compare": false, 00:12:09.974 "compare_and_write": false, 00:12:09.974 "abort": true, 00:12:09.974 "seek_hole": false, 00:12:09.974 "seek_data": false, 00:12:09.974 "copy": true, 00:12:09.974 "nvme_iov_md": false 00:12:09.974 }, 00:12:09.974 "memory_domains": [ 00:12:09.974 { 00:12:09.974 "dma_device_id": "system", 00:12:09.974 "dma_device_type": 1 00:12:09.974 }, 00:12:09.974 { 00:12:09.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.974 "dma_device_type": 2 00:12:09.974 } 00:12:09.974 ], 00:12:09.974 "driver_specific": {} 00:12:09.974 } 00:12:09.975 ] 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.975 "name": "Existed_Raid", 00:12:09.975 "uuid": "d97940b4-d836-4a0a-b69e-8765e9512473", 00:12:09.975 "strip_size_kb": 0, 00:12:09.975 "state": "configuring", 00:12:09.975 "raid_level": "raid1", 00:12:09.975 "superblock": true, 00:12:09.975 "num_base_bdevs": 4, 00:12:09.975 "num_base_bdevs_discovered": 2, 00:12:09.975 "num_base_bdevs_operational": 4, 00:12:09.975 
"base_bdevs_list": [ 00:12:09.975 { 00:12:09.975 "name": "BaseBdev1", 00:12:09.975 "uuid": "490a9da2-a6fc-4fa1-aaa2-f1747f58aed5", 00:12:09.975 "is_configured": true, 00:12:09.975 "data_offset": 2048, 00:12:09.975 "data_size": 63488 00:12:09.975 }, 00:12:09.975 { 00:12:09.975 "name": "BaseBdev2", 00:12:09.975 "uuid": "36455f22-26b3-4814-9722-3833499054ed", 00:12:09.975 "is_configured": true, 00:12:09.975 "data_offset": 2048, 00:12:09.975 "data_size": 63488 00:12:09.975 }, 00:12:09.975 { 00:12:09.975 "name": "BaseBdev3", 00:12:09.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.975 "is_configured": false, 00:12:09.975 "data_offset": 0, 00:12:09.975 "data_size": 0 00:12:09.975 }, 00:12:09.975 { 00:12:09.975 "name": "BaseBdev4", 00:12:09.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.975 "is_configured": false, 00:12:09.975 "data_offset": 0, 00:12:09.975 "data_size": 0 00:12:09.975 } 00:12:09.975 ] 00:12:09.975 }' 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.975 17:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.233 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:10.233 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.233 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.491 [2024-11-26 17:56:52.134719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.491 BaseBdev3 00:12:10.491 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.491 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:10.491 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:12:10.491 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.491 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:10.491 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.491 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.491 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:10.491 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.491 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.491 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.491 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:10.491 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.491 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.491 [ 00:12:10.491 { 00:12:10.491 "name": "BaseBdev3", 00:12:10.491 "aliases": [ 00:12:10.491 "92231574-cb31-41e0-b0a1-a1e39756e46b" 00:12:10.491 ], 00:12:10.491 "product_name": "Malloc disk", 00:12:10.491 "block_size": 512, 00:12:10.491 "num_blocks": 65536, 00:12:10.491 "uuid": "92231574-cb31-41e0-b0a1-a1e39756e46b", 00:12:10.491 "assigned_rate_limits": { 00:12:10.491 "rw_ios_per_sec": 0, 00:12:10.491 "rw_mbytes_per_sec": 0, 00:12:10.491 "r_mbytes_per_sec": 0, 00:12:10.491 "w_mbytes_per_sec": 0 00:12:10.491 }, 00:12:10.491 "claimed": true, 00:12:10.491 "claim_type": "exclusive_write", 00:12:10.491 "zoned": false, 00:12:10.491 "supported_io_types": { 00:12:10.491 "read": true, 00:12:10.491 
"write": true, 00:12:10.491 "unmap": true, 00:12:10.491 "flush": true, 00:12:10.491 "reset": true, 00:12:10.491 "nvme_admin": false, 00:12:10.491 "nvme_io": false, 00:12:10.491 "nvme_io_md": false, 00:12:10.491 "write_zeroes": true, 00:12:10.491 "zcopy": true, 00:12:10.491 "get_zone_info": false, 00:12:10.492 "zone_management": false, 00:12:10.492 "zone_append": false, 00:12:10.492 "compare": false, 00:12:10.492 "compare_and_write": false, 00:12:10.492 "abort": true, 00:12:10.492 "seek_hole": false, 00:12:10.492 "seek_data": false, 00:12:10.492 "copy": true, 00:12:10.492 "nvme_iov_md": false 00:12:10.492 }, 00:12:10.492 "memory_domains": [ 00:12:10.492 { 00:12:10.492 "dma_device_id": "system", 00:12:10.492 "dma_device_type": 1 00:12:10.492 }, 00:12:10.492 { 00:12:10.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.492 "dma_device_type": 2 00:12:10.492 } 00:12:10.492 ], 00:12:10.492 "driver_specific": {} 00:12:10.492 } 00:12:10.492 ] 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.492 "name": "Existed_Raid", 00:12:10.492 "uuid": "d97940b4-d836-4a0a-b69e-8765e9512473", 00:12:10.492 "strip_size_kb": 0, 00:12:10.492 "state": "configuring", 00:12:10.492 "raid_level": "raid1", 00:12:10.492 "superblock": true, 00:12:10.492 "num_base_bdevs": 4, 00:12:10.492 "num_base_bdevs_discovered": 3, 00:12:10.492 "num_base_bdevs_operational": 4, 00:12:10.492 "base_bdevs_list": [ 00:12:10.492 { 00:12:10.492 "name": "BaseBdev1", 00:12:10.492 "uuid": "490a9da2-a6fc-4fa1-aaa2-f1747f58aed5", 00:12:10.492 "is_configured": true, 00:12:10.492 "data_offset": 2048, 00:12:10.492 "data_size": 63488 00:12:10.492 }, 00:12:10.492 { 00:12:10.492 "name": "BaseBdev2", 00:12:10.492 "uuid": 
"36455f22-26b3-4814-9722-3833499054ed", 00:12:10.492 "is_configured": true, 00:12:10.492 "data_offset": 2048, 00:12:10.492 "data_size": 63488 00:12:10.492 }, 00:12:10.492 { 00:12:10.492 "name": "BaseBdev3", 00:12:10.492 "uuid": "92231574-cb31-41e0-b0a1-a1e39756e46b", 00:12:10.492 "is_configured": true, 00:12:10.492 "data_offset": 2048, 00:12:10.492 "data_size": 63488 00:12:10.492 }, 00:12:10.492 { 00:12:10.492 "name": "BaseBdev4", 00:12:10.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.492 "is_configured": false, 00:12:10.492 "data_offset": 0, 00:12:10.492 "data_size": 0 00:12:10.492 } 00:12:10.492 ] 00:12:10.492 }' 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.492 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.056 [2024-11-26 17:56:52.690643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:11.056 [2024-11-26 17:56:52.690987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:11.056 [2024-11-26 17:56:52.691006] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:11.056 [2024-11-26 17:56:52.691350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:11.056 [2024-11-26 17:56:52.691539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:11.056 [2024-11-26 17:56:52.691555] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:12:11.056 BaseBdev4 00:12:11.056 [2024-11-26 17:56:52.691753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.056 [ 00:12:11.056 { 00:12:11.056 "name": "BaseBdev4", 00:12:11.056 "aliases": [ 00:12:11.056 "64e4ec24-4971-4310-be0a-056db873c2b7" 00:12:11.056 ], 00:12:11.056 "product_name": "Malloc disk", 00:12:11.056 "block_size": 512, 00:12:11.056 
"num_blocks": 65536, 00:12:11.056 "uuid": "64e4ec24-4971-4310-be0a-056db873c2b7", 00:12:11.056 "assigned_rate_limits": { 00:12:11.056 "rw_ios_per_sec": 0, 00:12:11.056 "rw_mbytes_per_sec": 0, 00:12:11.056 "r_mbytes_per_sec": 0, 00:12:11.056 "w_mbytes_per_sec": 0 00:12:11.056 }, 00:12:11.056 "claimed": true, 00:12:11.056 "claim_type": "exclusive_write", 00:12:11.056 "zoned": false, 00:12:11.056 "supported_io_types": { 00:12:11.056 "read": true, 00:12:11.056 "write": true, 00:12:11.056 "unmap": true, 00:12:11.056 "flush": true, 00:12:11.056 "reset": true, 00:12:11.056 "nvme_admin": false, 00:12:11.056 "nvme_io": false, 00:12:11.056 "nvme_io_md": false, 00:12:11.056 "write_zeroes": true, 00:12:11.056 "zcopy": true, 00:12:11.056 "get_zone_info": false, 00:12:11.056 "zone_management": false, 00:12:11.056 "zone_append": false, 00:12:11.056 "compare": false, 00:12:11.056 "compare_and_write": false, 00:12:11.056 "abort": true, 00:12:11.056 "seek_hole": false, 00:12:11.056 "seek_data": false, 00:12:11.056 "copy": true, 00:12:11.056 "nvme_iov_md": false 00:12:11.056 }, 00:12:11.056 "memory_domains": [ 00:12:11.056 { 00:12:11.056 "dma_device_id": "system", 00:12:11.056 "dma_device_type": 1 00:12:11.056 }, 00:12:11.056 { 00:12:11.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.056 "dma_device_type": 2 00:12:11.056 } 00:12:11.056 ], 00:12:11.056 "driver_specific": {} 00:12:11.056 } 00:12:11.056 ] 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.056 "name": "Existed_Raid", 00:12:11.056 "uuid": "d97940b4-d836-4a0a-b69e-8765e9512473", 00:12:11.056 "strip_size_kb": 0, 00:12:11.056 "state": "online", 00:12:11.056 "raid_level": "raid1", 00:12:11.056 "superblock": true, 00:12:11.056 "num_base_bdevs": 4, 
00:12:11.056 "num_base_bdevs_discovered": 4, 00:12:11.056 "num_base_bdevs_operational": 4, 00:12:11.056 "base_bdevs_list": [ 00:12:11.056 { 00:12:11.056 "name": "BaseBdev1", 00:12:11.056 "uuid": "490a9da2-a6fc-4fa1-aaa2-f1747f58aed5", 00:12:11.056 "is_configured": true, 00:12:11.056 "data_offset": 2048, 00:12:11.056 "data_size": 63488 00:12:11.056 }, 00:12:11.056 { 00:12:11.056 "name": "BaseBdev2", 00:12:11.056 "uuid": "36455f22-26b3-4814-9722-3833499054ed", 00:12:11.056 "is_configured": true, 00:12:11.056 "data_offset": 2048, 00:12:11.056 "data_size": 63488 00:12:11.056 }, 00:12:11.056 { 00:12:11.056 "name": "BaseBdev3", 00:12:11.056 "uuid": "92231574-cb31-41e0-b0a1-a1e39756e46b", 00:12:11.056 "is_configured": true, 00:12:11.056 "data_offset": 2048, 00:12:11.056 "data_size": 63488 00:12:11.056 }, 00:12:11.056 { 00:12:11.056 "name": "BaseBdev4", 00:12:11.056 "uuid": "64e4ec24-4971-4310-be0a-056db873c2b7", 00:12:11.056 "is_configured": true, 00:12:11.056 "data_offset": 2048, 00:12:11.056 "data_size": 63488 00:12:11.056 } 00:12:11.056 ] 00:12:11.056 }' 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.056 17:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.624 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:11.624 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:11.624 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:11.624 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:11.624 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:11.624 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:11.624 
17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:11.624 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:11.624 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.624 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.624 [2024-11-26 17:56:53.270380] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:11.624 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.624 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:11.624 "name": "Existed_Raid", 00:12:11.624 "aliases": [ 00:12:11.624 "d97940b4-d836-4a0a-b69e-8765e9512473" 00:12:11.624 ], 00:12:11.624 "product_name": "Raid Volume", 00:12:11.624 "block_size": 512, 00:12:11.624 "num_blocks": 63488, 00:12:11.624 "uuid": "d97940b4-d836-4a0a-b69e-8765e9512473", 00:12:11.624 "assigned_rate_limits": { 00:12:11.624 "rw_ios_per_sec": 0, 00:12:11.624 "rw_mbytes_per_sec": 0, 00:12:11.624 "r_mbytes_per_sec": 0, 00:12:11.624 "w_mbytes_per_sec": 0 00:12:11.624 }, 00:12:11.624 "claimed": false, 00:12:11.624 "zoned": false, 00:12:11.625 "supported_io_types": { 00:12:11.625 "read": true, 00:12:11.625 "write": true, 00:12:11.625 "unmap": false, 00:12:11.625 "flush": false, 00:12:11.625 "reset": true, 00:12:11.625 "nvme_admin": false, 00:12:11.625 "nvme_io": false, 00:12:11.625 "nvme_io_md": false, 00:12:11.625 "write_zeroes": true, 00:12:11.625 "zcopy": false, 00:12:11.625 "get_zone_info": false, 00:12:11.625 "zone_management": false, 00:12:11.625 "zone_append": false, 00:12:11.625 "compare": false, 00:12:11.625 "compare_and_write": false, 00:12:11.625 "abort": false, 00:12:11.625 "seek_hole": false, 00:12:11.625 "seek_data": false, 00:12:11.625 "copy": false, 00:12:11.625 
"nvme_iov_md": false 00:12:11.625 }, 00:12:11.625 "memory_domains": [ 00:12:11.625 { 00:12:11.625 "dma_device_id": "system", 00:12:11.625 "dma_device_type": 1 00:12:11.625 }, 00:12:11.625 { 00:12:11.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.625 "dma_device_type": 2 00:12:11.625 }, 00:12:11.625 { 00:12:11.625 "dma_device_id": "system", 00:12:11.625 "dma_device_type": 1 00:12:11.625 }, 00:12:11.625 { 00:12:11.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.625 "dma_device_type": 2 00:12:11.625 }, 00:12:11.625 { 00:12:11.625 "dma_device_id": "system", 00:12:11.625 "dma_device_type": 1 00:12:11.625 }, 00:12:11.625 { 00:12:11.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.625 "dma_device_type": 2 00:12:11.625 }, 00:12:11.625 { 00:12:11.625 "dma_device_id": "system", 00:12:11.625 "dma_device_type": 1 00:12:11.625 }, 00:12:11.625 { 00:12:11.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.625 "dma_device_type": 2 00:12:11.625 } 00:12:11.625 ], 00:12:11.625 "driver_specific": { 00:12:11.625 "raid": { 00:12:11.625 "uuid": "d97940b4-d836-4a0a-b69e-8765e9512473", 00:12:11.625 "strip_size_kb": 0, 00:12:11.625 "state": "online", 00:12:11.625 "raid_level": "raid1", 00:12:11.625 "superblock": true, 00:12:11.625 "num_base_bdevs": 4, 00:12:11.625 "num_base_bdevs_discovered": 4, 00:12:11.625 "num_base_bdevs_operational": 4, 00:12:11.625 "base_bdevs_list": [ 00:12:11.625 { 00:12:11.625 "name": "BaseBdev1", 00:12:11.625 "uuid": "490a9da2-a6fc-4fa1-aaa2-f1747f58aed5", 00:12:11.625 "is_configured": true, 00:12:11.625 "data_offset": 2048, 00:12:11.625 "data_size": 63488 00:12:11.625 }, 00:12:11.625 { 00:12:11.625 "name": "BaseBdev2", 00:12:11.625 "uuid": "36455f22-26b3-4814-9722-3833499054ed", 00:12:11.625 "is_configured": true, 00:12:11.625 "data_offset": 2048, 00:12:11.625 "data_size": 63488 00:12:11.625 }, 00:12:11.625 { 00:12:11.625 "name": "BaseBdev3", 00:12:11.625 "uuid": "92231574-cb31-41e0-b0a1-a1e39756e46b", 00:12:11.625 "is_configured": true, 
00:12:11.625 "data_offset": 2048, 00:12:11.625 "data_size": 63488 00:12:11.625 }, 00:12:11.625 { 00:12:11.625 "name": "BaseBdev4", 00:12:11.625 "uuid": "64e4ec24-4971-4310-be0a-056db873c2b7", 00:12:11.625 "is_configured": true, 00:12:11.625 "data_offset": 2048, 00:12:11.625 "data_size": 63488 00:12:11.625 } 00:12:11.625 ] 00:12:11.625 } 00:12:11.625 } 00:12:11.625 }' 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:11.625 BaseBdev2 00:12:11.625 BaseBdev3 00:12:11.625 BaseBdev4' 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.625 17:56:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.625 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.885 [2024-11-26 17:56:53.577526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:11.885 17:56:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.885 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.145 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.145 "name": "Existed_Raid", 00:12:12.145 "uuid": "d97940b4-d836-4a0a-b69e-8765e9512473", 00:12:12.145 "strip_size_kb": 0, 00:12:12.145 
"state": "online", 00:12:12.145 "raid_level": "raid1", 00:12:12.145 "superblock": true, 00:12:12.145 "num_base_bdevs": 4, 00:12:12.145 "num_base_bdevs_discovered": 3, 00:12:12.145 "num_base_bdevs_operational": 3, 00:12:12.145 "base_bdevs_list": [ 00:12:12.145 { 00:12:12.145 "name": null, 00:12:12.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.145 "is_configured": false, 00:12:12.145 "data_offset": 0, 00:12:12.145 "data_size": 63488 00:12:12.145 }, 00:12:12.145 { 00:12:12.145 "name": "BaseBdev2", 00:12:12.145 "uuid": "36455f22-26b3-4814-9722-3833499054ed", 00:12:12.145 "is_configured": true, 00:12:12.145 "data_offset": 2048, 00:12:12.145 "data_size": 63488 00:12:12.145 }, 00:12:12.145 { 00:12:12.145 "name": "BaseBdev3", 00:12:12.145 "uuid": "92231574-cb31-41e0-b0a1-a1e39756e46b", 00:12:12.145 "is_configured": true, 00:12:12.145 "data_offset": 2048, 00:12:12.145 "data_size": 63488 00:12:12.145 }, 00:12:12.145 { 00:12:12.145 "name": "BaseBdev4", 00:12:12.145 "uuid": "64e4ec24-4971-4310-be0a-056db873c2b7", 00:12:12.145 "is_configured": true, 00:12:12.145 "data_offset": 2048, 00:12:12.145 "data_size": 63488 00:12:12.145 } 00:12:12.145 ] 00:12:12.145 }' 00:12:12.145 17:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.145 17:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.403 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:12.404 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:12.404 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.404 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.404 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.404 17:56:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:12.404 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.404 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:12.404 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:12.404 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:12.404 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.404 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.404 [2024-11-26 17:56:54.219961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.663 [2024-11-26 17:56:54.374061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.663 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.922 [2024-11-26 17:56:54.540740] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:12.922 [2024-11-26 17:56:54.540870] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.922 [2024-11-26 17:56:54.659295] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.922 [2024-11-26 17:56:54.659366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.922 [2024-11-26 17:56:54.659381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.922 BaseBdev2 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.922 17:56:54 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:13.183 [ 00:12:13.183 { 00:12:13.183 "name": "BaseBdev2", 00:12:13.183 "aliases": [ 00:12:13.183 "b77982a7-02d0-47d4-9021-3dc1a1a1e4b4" 00:12:13.183 ], 00:12:13.183 "product_name": "Malloc disk", 00:12:13.183 "block_size": 512, 00:12:13.183 "num_blocks": 65536, 00:12:13.183 "uuid": "b77982a7-02d0-47d4-9021-3dc1a1a1e4b4", 00:12:13.183 "assigned_rate_limits": { 00:12:13.183 "rw_ios_per_sec": 0, 00:12:13.183 "rw_mbytes_per_sec": 0, 00:12:13.183 "r_mbytes_per_sec": 0, 00:12:13.183 "w_mbytes_per_sec": 0 00:12:13.183 }, 00:12:13.183 "claimed": false, 00:12:13.183 "zoned": false, 00:12:13.183 "supported_io_types": { 00:12:13.183 "read": true, 00:12:13.183 "write": true, 00:12:13.183 "unmap": true, 00:12:13.183 "flush": true, 00:12:13.183 "reset": true, 00:12:13.183 "nvme_admin": false, 00:12:13.183 "nvme_io": false, 00:12:13.183 "nvme_io_md": false, 00:12:13.183 "write_zeroes": true, 00:12:13.183 "zcopy": true, 00:12:13.183 "get_zone_info": false, 00:12:13.183 "zone_management": false, 00:12:13.183 "zone_append": false, 00:12:13.183 "compare": false, 00:12:13.183 "compare_and_write": false, 00:12:13.183 "abort": true, 00:12:13.183 "seek_hole": false, 00:12:13.183 "seek_data": false, 00:12:13.183 "copy": true, 00:12:13.183 "nvme_iov_md": false 00:12:13.183 }, 00:12:13.183 "memory_domains": [ 00:12:13.183 { 00:12:13.183 "dma_device_id": "system", 00:12:13.183 "dma_device_type": 1 00:12:13.183 }, 00:12:13.183 { 00:12:13.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.183 "dma_device_type": 2 00:12:13.183 } 00:12:13.183 ], 00:12:13.183 "driver_specific": {} 00:12:13.183 } 00:12:13.183 ] 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:13.183 17:56:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.183 BaseBdev3 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:13.183 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.183 17:56:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.183 [ 00:12:13.183 { 00:12:13.183 "name": "BaseBdev3", 00:12:13.184 "aliases": [ 00:12:13.184 "df222d3d-a0c7-4f16-95e2-e94534d955dc" 00:12:13.184 ], 00:12:13.184 "product_name": "Malloc disk", 00:12:13.184 "block_size": 512, 00:12:13.184 "num_blocks": 65536, 00:12:13.184 "uuid": "df222d3d-a0c7-4f16-95e2-e94534d955dc", 00:12:13.184 "assigned_rate_limits": { 00:12:13.184 "rw_ios_per_sec": 0, 00:12:13.184 "rw_mbytes_per_sec": 0, 00:12:13.184 "r_mbytes_per_sec": 0, 00:12:13.184 "w_mbytes_per_sec": 0 00:12:13.184 }, 00:12:13.184 "claimed": false, 00:12:13.184 "zoned": false, 00:12:13.184 "supported_io_types": { 00:12:13.184 "read": true, 00:12:13.184 "write": true, 00:12:13.184 "unmap": true, 00:12:13.184 "flush": true, 00:12:13.184 "reset": true, 00:12:13.184 "nvme_admin": false, 00:12:13.184 "nvme_io": false, 00:12:13.184 "nvme_io_md": false, 00:12:13.184 "write_zeroes": true, 00:12:13.184 "zcopy": true, 00:12:13.184 "get_zone_info": false, 00:12:13.184 "zone_management": false, 00:12:13.184 "zone_append": false, 00:12:13.184 "compare": false, 00:12:13.184 "compare_and_write": false, 00:12:13.184 "abort": true, 00:12:13.184 "seek_hole": false, 00:12:13.184 "seek_data": false, 00:12:13.184 "copy": true, 00:12:13.184 "nvme_iov_md": false 00:12:13.184 }, 00:12:13.184 "memory_domains": [ 00:12:13.184 { 00:12:13.184 "dma_device_id": "system", 00:12:13.184 "dma_device_type": 1 00:12:13.184 }, 00:12:13.184 { 00:12:13.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.184 "dma_device_type": 2 00:12:13.184 } 00:12:13.184 ], 00:12:13.184 "driver_specific": {} 00:12:13.184 } 00:12:13.184 ] 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.184 BaseBdev4 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.184 [ 00:12:13.184 { 00:12:13.184 "name": "BaseBdev4", 00:12:13.184 "aliases": [ 00:12:13.184 "79164077-0623-4c5c-8ae1-c6a72410a54c" 00:12:13.184 ], 00:12:13.184 "product_name": "Malloc disk", 00:12:13.184 "block_size": 512, 00:12:13.184 "num_blocks": 65536, 00:12:13.184 "uuid": "79164077-0623-4c5c-8ae1-c6a72410a54c", 00:12:13.184 "assigned_rate_limits": { 00:12:13.184 "rw_ios_per_sec": 0, 00:12:13.184 "rw_mbytes_per_sec": 0, 00:12:13.184 "r_mbytes_per_sec": 0, 00:12:13.184 "w_mbytes_per_sec": 0 00:12:13.184 }, 00:12:13.184 "claimed": false, 00:12:13.184 "zoned": false, 00:12:13.184 "supported_io_types": { 00:12:13.184 "read": true, 00:12:13.184 "write": true, 00:12:13.184 "unmap": true, 00:12:13.184 "flush": true, 00:12:13.184 "reset": true, 00:12:13.184 "nvme_admin": false, 00:12:13.184 "nvme_io": false, 00:12:13.184 "nvme_io_md": false, 00:12:13.184 "write_zeroes": true, 00:12:13.184 "zcopy": true, 00:12:13.184 "get_zone_info": false, 00:12:13.184 "zone_management": false, 00:12:13.184 "zone_append": false, 00:12:13.184 "compare": false, 00:12:13.184 "compare_and_write": false, 00:12:13.184 "abort": true, 00:12:13.184 "seek_hole": false, 00:12:13.184 "seek_data": false, 00:12:13.184 "copy": true, 00:12:13.184 "nvme_iov_md": false 00:12:13.184 }, 00:12:13.184 "memory_domains": [ 00:12:13.184 { 00:12:13.184 "dma_device_id": "system", 00:12:13.184 "dma_device_type": 1 00:12:13.184 }, 00:12:13.184 { 00:12:13.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.184 "dma_device_type": 2 00:12:13.184 } 00:12:13.184 ], 00:12:13.184 "driver_specific": {} 00:12:13.184 } 00:12:13.184 ] 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.184 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.184 [2024-11-26 17:56:54.984825] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:13.184 [2024-11-26 17:56:54.984955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:13.184 [2024-11-26 17:56:54.984991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.184 [2024-11-26 17:56:54.987299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:13.185 [2024-11-26 17:56:54.987411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:13.185 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.185 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.185 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.185 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.185 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.185 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:13.185 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.185 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.185 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.185 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.185 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.185 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.185 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.185 17:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.185 17:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.185 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.185 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.185 "name": "Existed_Raid", 00:12:13.185 "uuid": "ae0f2efc-cf3a-4022-ae94-ad138c8a00db", 00:12:13.185 "strip_size_kb": 0, 00:12:13.185 "state": "configuring", 00:12:13.185 "raid_level": "raid1", 00:12:13.185 "superblock": true, 00:12:13.185 "num_base_bdevs": 4, 00:12:13.185 "num_base_bdevs_discovered": 3, 00:12:13.185 "num_base_bdevs_operational": 4, 00:12:13.185 "base_bdevs_list": [ 00:12:13.185 { 00:12:13.185 "name": "BaseBdev1", 00:12:13.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.185 "is_configured": false, 00:12:13.185 "data_offset": 0, 00:12:13.185 "data_size": 0 00:12:13.185 }, 00:12:13.185 { 00:12:13.185 "name": "BaseBdev2", 00:12:13.185 "uuid": "b77982a7-02d0-47d4-9021-3dc1a1a1e4b4", 
00:12:13.185 "is_configured": true, 00:12:13.185 "data_offset": 2048, 00:12:13.185 "data_size": 63488 00:12:13.185 }, 00:12:13.185 { 00:12:13.185 "name": "BaseBdev3", 00:12:13.185 "uuid": "df222d3d-a0c7-4f16-95e2-e94534d955dc", 00:12:13.185 "is_configured": true, 00:12:13.185 "data_offset": 2048, 00:12:13.185 "data_size": 63488 00:12:13.185 }, 00:12:13.185 { 00:12:13.185 "name": "BaseBdev4", 00:12:13.185 "uuid": "79164077-0623-4c5c-8ae1-c6a72410a54c", 00:12:13.185 "is_configured": true, 00:12:13.185 "data_offset": 2048, 00:12:13.185 "data_size": 63488 00:12:13.185 } 00:12:13.185 ] 00:12:13.185 }' 00:12:13.185 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.185 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.752 [2024-11-26 17:56:55.452147] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.752 "name": "Existed_Raid", 00:12:13.752 "uuid": "ae0f2efc-cf3a-4022-ae94-ad138c8a00db", 00:12:13.752 "strip_size_kb": 0, 00:12:13.752 "state": "configuring", 00:12:13.752 "raid_level": "raid1", 00:12:13.752 "superblock": true, 00:12:13.752 "num_base_bdevs": 4, 00:12:13.752 "num_base_bdevs_discovered": 2, 00:12:13.752 "num_base_bdevs_operational": 4, 00:12:13.752 "base_bdevs_list": [ 00:12:13.752 { 00:12:13.752 "name": "BaseBdev1", 00:12:13.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.752 "is_configured": false, 00:12:13.752 "data_offset": 0, 00:12:13.752 "data_size": 0 00:12:13.752 }, 00:12:13.752 { 00:12:13.752 "name": null, 00:12:13.752 "uuid": "b77982a7-02d0-47d4-9021-3dc1a1a1e4b4", 00:12:13.752 
"is_configured": false, 00:12:13.752 "data_offset": 0, 00:12:13.752 "data_size": 63488 00:12:13.752 }, 00:12:13.752 { 00:12:13.752 "name": "BaseBdev3", 00:12:13.752 "uuid": "df222d3d-a0c7-4f16-95e2-e94534d955dc", 00:12:13.752 "is_configured": true, 00:12:13.752 "data_offset": 2048, 00:12:13.752 "data_size": 63488 00:12:13.752 }, 00:12:13.752 { 00:12:13.752 "name": "BaseBdev4", 00:12:13.752 "uuid": "79164077-0623-4c5c-8ae1-c6a72410a54c", 00:12:13.752 "is_configured": true, 00:12:13.752 "data_offset": 2048, 00:12:13.752 "data_size": 63488 00:12:13.752 } 00:12:13.752 ] 00:12:13.752 }' 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.752 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.321 [2024-11-26 17:56:55.995646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:14.321 BaseBdev1 
00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.321 17:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.321 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.321 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:14.321 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.322 [ 00:12:14.322 { 00:12:14.322 "name": "BaseBdev1", 00:12:14.322 "aliases": [ 00:12:14.322 "8d3caf58-3864-4078-9467-c62e48cbd8bc" 00:12:14.322 ], 00:12:14.322 "product_name": "Malloc disk", 00:12:14.322 "block_size": 512, 00:12:14.322 "num_blocks": 65536, 00:12:14.322 "uuid": "8d3caf58-3864-4078-9467-c62e48cbd8bc", 00:12:14.322 "assigned_rate_limits": { 00:12:14.322 
"rw_ios_per_sec": 0, 00:12:14.322 "rw_mbytes_per_sec": 0, 00:12:14.322 "r_mbytes_per_sec": 0, 00:12:14.322 "w_mbytes_per_sec": 0 00:12:14.322 }, 00:12:14.322 "claimed": true, 00:12:14.322 "claim_type": "exclusive_write", 00:12:14.322 "zoned": false, 00:12:14.322 "supported_io_types": { 00:12:14.322 "read": true, 00:12:14.322 "write": true, 00:12:14.322 "unmap": true, 00:12:14.322 "flush": true, 00:12:14.322 "reset": true, 00:12:14.322 "nvme_admin": false, 00:12:14.322 "nvme_io": false, 00:12:14.322 "nvme_io_md": false, 00:12:14.322 "write_zeroes": true, 00:12:14.322 "zcopy": true, 00:12:14.322 "get_zone_info": false, 00:12:14.322 "zone_management": false, 00:12:14.322 "zone_append": false, 00:12:14.322 "compare": false, 00:12:14.322 "compare_and_write": false, 00:12:14.322 "abort": true, 00:12:14.322 "seek_hole": false, 00:12:14.322 "seek_data": false, 00:12:14.322 "copy": true, 00:12:14.322 "nvme_iov_md": false 00:12:14.322 }, 00:12:14.322 "memory_domains": [ 00:12:14.322 { 00:12:14.322 "dma_device_id": "system", 00:12:14.322 "dma_device_type": 1 00:12:14.322 }, 00:12:14.322 { 00:12:14.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.322 "dma_device_type": 2 00:12:14.322 } 00:12:14.322 ], 00:12:14.322 "driver_specific": {} 00:12:14.322 } 00:12:14.322 ] 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.322 "name": "Existed_Raid", 00:12:14.322 "uuid": "ae0f2efc-cf3a-4022-ae94-ad138c8a00db", 00:12:14.322 "strip_size_kb": 0, 00:12:14.322 "state": "configuring", 00:12:14.322 "raid_level": "raid1", 00:12:14.322 "superblock": true, 00:12:14.322 "num_base_bdevs": 4, 00:12:14.322 "num_base_bdevs_discovered": 3, 00:12:14.322 "num_base_bdevs_operational": 4, 00:12:14.322 "base_bdevs_list": [ 00:12:14.322 { 00:12:14.322 "name": "BaseBdev1", 00:12:14.322 "uuid": "8d3caf58-3864-4078-9467-c62e48cbd8bc", 00:12:14.322 "is_configured": true, 00:12:14.322 "data_offset": 2048, 00:12:14.322 "data_size": 63488 
00:12:14.322 }, 00:12:14.322 { 00:12:14.322 "name": null, 00:12:14.322 "uuid": "b77982a7-02d0-47d4-9021-3dc1a1a1e4b4", 00:12:14.322 "is_configured": false, 00:12:14.322 "data_offset": 0, 00:12:14.322 "data_size": 63488 00:12:14.322 }, 00:12:14.322 { 00:12:14.322 "name": "BaseBdev3", 00:12:14.322 "uuid": "df222d3d-a0c7-4f16-95e2-e94534d955dc", 00:12:14.322 "is_configured": true, 00:12:14.322 "data_offset": 2048, 00:12:14.322 "data_size": 63488 00:12:14.322 }, 00:12:14.322 { 00:12:14.322 "name": "BaseBdev4", 00:12:14.322 "uuid": "79164077-0623-4c5c-8ae1-c6a72410a54c", 00:12:14.322 "is_configured": true, 00:12:14.322 "data_offset": 2048, 00:12:14.322 "data_size": 63488 00:12:14.322 } 00:12:14.322 ] 00:12:14.322 }' 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.322 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.607 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:14.607 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.607 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.607 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.607 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.866 
[2024-11-26 17:56:56.479077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.866 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.866 17:56:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.866 "name": "Existed_Raid", 00:12:14.866 "uuid": "ae0f2efc-cf3a-4022-ae94-ad138c8a00db", 00:12:14.866 "strip_size_kb": 0, 00:12:14.866 "state": "configuring", 00:12:14.866 "raid_level": "raid1", 00:12:14.866 "superblock": true, 00:12:14.866 "num_base_bdevs": 4, 00:12:14.866 "num_base_bdevs_discovered": 2, 00:12:14.866 "num_base_bdevs_operational": 4, 00:12:14.866 "base_bdevs_list": [ 00:12:14.866 { 00:12:14.866 "name": "BaseBdev1", 00:12:14.866 "uuid": "8d3caf58-3864-4078-9467-c62e48cbd8bc", 00:12:14.866 "is_configured": true, 00:12:14.866 "data_offset": 2048, 00:12:14.866 "data_size": 63488 00:12:14.866 }, 00:12:14.866 { 00:12:14.866 "name": null, 00:12:14.866 "uuid": "b77982a7-02d0-47d4-9021-3dc1a1a1e4b4", 00:12:14.866 "is_configured": false, 00:12:14.866 "data_offset": 0, 00:12:14.866 "data_size": 63488 00:12:14.866 }, 00:12:14.866 { 00:12:14.866 "name": null, 00:12:14.866 "uuid": "df222d3d-a0c7-4f16-95e2-e94534d955dc", 00:12:14.866 "is_configured": false, 00:12:14.866 "data_offset": 0, 00:12:14.866 "data_size": 63488 00:12:14.866 }, 00:12:14.866 { 00:12:14.866 "name": "BaseBdev4", 00:12:14.866 "uuid": "79164077-0623-4c5c-8ae1-c6a72410a54c", 00:12:14.867 "is_configured": true, 00:12:14.867 "data_offset": 2048, 00:12:14.867 "data_size": 63488 00:12:14.867 } 00:12:14.867 ] 00:12:14.867 }' 00:12:14.867 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.867 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.125 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.125 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:15.125 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.125 
17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.125 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.385 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:15.385 17:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:15.385 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.385 17:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.385 [2024-11-26 17:56:57.002211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:15.385 17:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.385 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:15.385 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.385 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.385 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.385 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.385 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.385 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.385 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.385 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:15.385 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.386 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.386 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.386 17:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.386 17:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.386 17:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.386 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.386 "name": "Existed_Raid", 00:12:15.386 "uuid": "ae0f2efc-cf3a-4022-ae94-ad138c8a00db", 00:12:15.386 "strip_size_kb": 0, 00:12:15.386 "state": "configuring", 00:12:15.386 "raid_level": "raid1", 00:12:15.386 "superblock": true, 00:12:15.386 "num_base_bdevs": 4, 00:12:15.386 "num_base_bdevs_discovered": 3, 00:12:15.386 "num_base_bdevs_operational": 4, 00:12:15.386 "base_bdevs_list": [ 00:12:15.386 { 00:12:15.386 "name": "BaseBdev1", 00:12:15.386 "uuid": "8d3caf58-3864-4078-9467-c62e48cbd8bc", 00:12:15.386 "is_configured": true, 00:12:15.386 "data_offset": 2048, 00:12:15.386 "data_size": 63488 00:12:15.386 }, 00:12:15.386 { 00:12:15.386 "name": null, 00:12:15.386 "uuid": "b77982a7-02d0-47d4-9021-3dc1a1a1e4b4", 00:12:15.386 "is_configured": false, 00:12:15.386 "data_offset": 0, 00:12:15.386 "data_size": 63488 00:12:15.386 }, 00:12:15.386 { 00:12:15.386 "name": "BaseBdev3", 00:12:15.386 "uuid": "df222d3d-a0c7-4f16-95e2-e94534d955dc", 00:12:15.386 "is_configured": true, 00:12:15.386 "data_offset": 2048, 00:12:15.386 "data_size": 63488 00:12:15.386 }, 00:12:15.386 { 00:12:15.386 "name": "BaseBdev4", 00:12:15.386 "uuid": 
"79164077-0623-4c5c-8ae1-c6a72410a54c", 00:12:15.386 "is_configured": true, 00:12:15.386 "data_offset": 2048, 00:12:15.386 "data_size": 63488 00:12:15.386 } 00:12:15.386 ] 00:12:15.386 }' 00:12:15.386 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.386 17:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.645 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.645 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:15.645 17:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.645 17:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.904 [2024-11-26 17:56:57.538259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.904 17:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.905 17:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.905 17:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.905 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.905 "name": "Existed_Raid", 00:12:15.905 "uuid": "ae0f2efc-cf3a-4022-ae94-ad138c8a00db", 00:12:15.905 "strip_size_kb": 0, 00:12:15.905 "state": "configuring", 00:12:15.905 "raid_level": "raid1", 00:12:15.905 "superblock": true, 00:12:15.905 "num_base_bdevs": 4, 00:12:15.905 "num_base_bdevs_discovered": 2, 00:12:15.905 "num_base_bdevs_operational": 4, 00:12:15.905 "base_bdevs_list": [ 00:12:15.905 { 00:12:15.905 "name": null, 00:12:15.905 
"uuid": "8d3caf58-3864-4078-9467-c62e48cbd8bc", 00:12:15.905 "is_configured": false, 00:12:15.905 "data_offset": 0, 00:12:15.905 "data_size": 63488 00:12:15.905 }, 00:12:15.905 { 00:12:15.905 "name": null, 00:12:15.905 "uuid": "b77982a7-02d0-47d4-9021-3dc1a1a1e4b4", 00:12:15.905 "is_configured": false, 00:12:15.905 "data_offset": 0, 00:12:15.905 "data_size": 63488 00:12:15.905 }, 00:12:15.905 { 00:12:15.905 "name": "BaseBdev3", 00:12:15.905 "uuid": "df222d3d-a0c7-4f16-95e2-e94534d955dc", 00:12:15.905 "is_configured": true, 00:12:15.905 "data_offset": 2048, 00:12:15.905 "data_size": 63488 00:12:15.905 }, 00:12:15.905 { 00:12:15.905 "name": "BaseBdev4", 00:12:15.905 "uuid": "79164077-0623-4c5c-8ae1-c6a72410a54c", 00:12:15.905 "is_configured": true, 00:12:15.905 "data_offset": 2048, 00:12:15.905 "data_size": 63488 00:12:15.905 } 00:12:15.905 ] 00:12:15.905 }' 00:12:15.905 17:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.905 17:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.473 [2024-11-26 17:56:58.179247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.473 17:56:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.473 "name": "Existed_Raid", 00:12:16.473 "uuid": "ae0f2efc-cf3a-4022-ae94-ad138c8a00db", 00:12:16.473 "strip_size_kb": 0, 00:12:16.473 "state": "configuring", 00:12:16.473 "raid_level": "raid1", 00:12:16.473 "superblock": true, 00:12:16.473 "num_base_bdevs": 4, 00:12:16.473 "num_base_bdevs_discovered": 3, 00:12:16.473 "num_base_bdevs_operational": 4, 00:12:16.473 "base_bdevs_list": [ 00:12:16.473 { 00:12:16.473 "name": null, 00:12:16.473 "uuid": "8d3caf58-3864-4078-9467-c62e48cbd8bc", 00:12:16.473 "is_configured": false, 00:12:16.473 "data_offset": 0, 00:12:16.473 "data_size": 63488 00:12:16.473 }, 00:12:16.473 { 00:12:16.473 "name": "BaseBdev2", 00:12:16.473 "uuid": "b77982a7-02d0-47d4-9021-3dc1a1a1e4b4", 00:12:16.473 "is_configured": true, 00:12:16.473 "data_offset": 2048, 00:12:16.473 "data_size": 63488 00:12:16.473 }, 00:12:16.473 { 00:12:16.473 "name": "BaseBdev3", 00:12:16.473 "uuid": "df222d3d-a0c7-4f16-95e2-e94534d955dc", 00:12:16.473 "is_configured": true, 00:12:16.473 "data_offset": 2048, 00:12:16.473 "data_size": 63488 00:12:16.473 }, 00:12:16.473 { 00:12:16.473 "name": "BaseBdev4", 00:12:16.473 "uuid": "79164077-0623-4c5c-8ae1-c6a72410a54c", 00:12:16.473 "is_configured": true, 00:12:16.473 "data_offset": 2048, 00:12:16.473 "data_size": 63488 00:12:16.473 } 00:12:16.473 ] 00:12:16.473 }' 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.473 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.041 17:56:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8d3caf58-3864-4078-9467-c62e48cbd8bc 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.041 [2024-11-26 17:56:58.801618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:17.041 [2024-11-26 17:56:58.801913] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:17.041 [2024-11-26 17:56:58.801933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:17.041 [2024-11-26 17:56:58.802252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:17.041 NewBaseBdev 00:12:17.041 [2024-11-26 17:56:58.802452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:17.041 [2024-11-26 17:56:58.802465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:17.041 [2024-11-26 17:56:58.802616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:17.041 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.041 17:56:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.041 [ 00:12:17.041 { 00:12:17.041 "name": "NewBaseBdev", 00:12:17.041 "aliases": [ 00:12:17.041 "8d3caf58-3864-4078-9467-c62e48cbd8bc" 00:12:17.041 ], 00:12:17.041 "product_name": "Malloc disk", 00:12:17.041 "block_size": 512, 00:12:17.041 "num_blocks": 65536, 00:12:17.041 "uuid": "8d3caf58-3864-4078-9467-c62e48cbd8bc", 00:12:17.041 "assigned_rate_limits": { 00:12:17.041 "rw_ios_per_sec": 0, 00:12:17.041 "rw_mbytes_per_sec": 0, 00:12:17.041 "r_mbytes_per_sec": 0, 00:12:17.041 "w_mbytes_per_sec": 0 00:12:17.041 }, 00:12:17.041 "claimed": true, 00:12:17.041 "claim_type": "exclusive_write", 00:12:17.041 "zoned": false, 00:12:17.041 "supported_io_types": { 00:12:17.041 "read": true, 00:12:17.041 "write": true, 00:12:17.041 "unmap": true, 00:12:17.041 "flush": true, 00:12:17.041 "reset": true, 00:12:17.041 "nvme_admin": false, 00:12:17.041 "nvme_io": false, 00:12:17.041 "nvme_io_md": false, 00:12:17.041 "write_zeroes": true, 00:12:17.041 "zcopy": true, 00:12:17.041 "get_zone_info": false, 00:12:17.041 "zone_management": false, 00:12:17.041 "zone_append": false, 00:12:17.041 "compare": false, 00:12:17.041 "compare_and_write": false, 00:12:17.041 "abort": true, 00:12:17.041 "seek_hole": false, 00:12:17.041 "seek_data": false, 00:12:17.041 "copy": true, 00:12:17.041 "nvme_iov_md": false 00:12:17.042 }, 00:12:17.042 "memory_domains": [ 00:12:17.042 { 00:12:17.042 "dma_device_id": "system", 00:12:17.042 "dma_device_type": 1 00:12:17.042 }, 00:12:17.042 { 00:12:17.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.042 "dma_device_type": 2 00:12:17.042 } 00:12:17.042 ], 00:12:17.042 "driver_specific": {} 00:12:17.042 } 00:12:17.042 ] 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:17.042 17:56:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.042 "name": "Existed_Raid", 00:12:17.042 "uuid": "ae0f2efc-cf3a-4022-ae94-ad138c8a00db", 00:12:17.042 "strip_size_kb": 0, 00:12:17.042 
"state": "online", 00:12:17.042 "raid_level": "raid1", 00:12:17.042 "superblock": true, 00:12:17.042 "num_base_bdevs": 4, 00:12:17.042 "num_base_bdevs_discovered": 4, 00:12:17.042 "num_base_bdevs_operational": 4, 00:12:17.042 "base_bdevs_list": [ 00:12:17.042 { 00:12:17.042 "name": "NewBaseBdev", 00:12:17.042 "uuid": "8d3caf58-3864-4078-9467-c62e48cbd8bc", 00:12:17.042 "is_configured": true, 00:12:17.042 "data_offset": 2048, 00:12:17.042 "data_size": 63488 00:12:17.042 }, 00:12:17.042 { 00:12:17.042 "name": "BaseBdev2", 00:12:17.042 "uuid": "b77982a7-02d0-47d4-9021-3dc1a1a1e4b4", 00:12:17.042 "is_configured": true, 00:12:17.042 "data_offset": 2048, 00:12:17.042 "data_size": 63488 00:12:17.042 }, 00:12:17.042 { 00:12:17.042 "name": "BaseBdev3", 00:12:17.042 "uuid": "df222d3d-a0c7-4f16-95e2-e94534d955dc", 00:12:17.042 "is_configured": true, 00:12:17.042 "data_offset": 2048, 00:12:17.042 "data_size": 63488 00:12:17.042 }, 00:12:17.042 { 00:12:17.042 "name": "BaseBdev4", 00:12:17.042 "uuid": "79164077-0623-4c5c-8ae1-c6a72410a54c", 00:12:17.042 "is_configured": true, 00:12:17.042 "data_offset": 2048, 00:12:17.042 "data_size": 63488 00:12:17.042 } 00:12:17.042 ] 00:12:17.042 }' 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.042 17:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.609 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:17.609 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:17.609 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:17.609 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:17.609 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:17.609 
17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:17.609 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:17.609 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:17.609 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.609 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.609 [2024-11-26 17:56:59.321759] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.609 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.609 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:17.609 "name": "Existed_Raid", 00:12:17.609 "aliases": [ 00:12:17.609 "ae0f2efc-cf3a-4022-ae94-ad138c8a00db" 00:12:17.609 ], 00:12:17.609 "product_name": "Raid Volume", 00:12:17.609 "block_size": 512, 00:12:17.609 "num_blocks": 63488, 00:12:17.609 "uuid": "ae0f2efc-cf3a-4022-ae94-ad138c8a00db", 00:12:17.609 "assigned_rate_limits": { 00:12:17.609 "rw_ios_per_sec": 0, 00:12:17.609 "rw_mbytes_per_sec": 0, 00:12:17.609 "r_mbytes_per_sec": 0, 00:12:17.609 "w_mbytes_per_sec": 0 00:12:17.609 }, 00:12:17.609 "claimed": false, 00:12:17.609 "zoned": false, 00:12:17.609 "supported_io_types": { 00:12:17.609 "read": true, 00:12:17.609 "write": true, 00:12:17.610 "unmap": false, 00:12:17.610 "flush": false, 00:12:17.610 "reset": true, 00:12:17.610 "nvme_admin": false, 00:12:17.610 "nvme_io": false, 00:12:17.610 "nvme_io_md": false, 00:12:17.610 "write_zeroes": true, 00:12:17.610 "zcopy": false, 00:12:17.610 "get_zone_info": false, 00:12:17.610 "zone_management": false, 00:12:17.610 "zone_append": false, 00:12:17.610 "compare": false, 00:12:17.610 "compare_and_write": false, 00:12:17.610 
"abort": false, 00:12:17.610 "seek_hole": false, 00:12:17.610 "seek_data": false, 00:12:17.610 "copy": false, 00:12:17.610 "nvme_iov_md": false 00:12:17.610 }, 00:12:17.610 "memory_domains": [ 00:12:17.610 { 00:12:17.610 "dma_device_id": "system", 00:12:17.610 "dma_device_type": 1 00:12:17.610 }, 00:12:17.610 { 00:12:17.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.610 "dma_device_type": 2 00:12:17.610 }, 00:12:17.610 { 00:12:17.610 "dma_device_id": "system", 00:12:17.610 "dma_device_type": 1 00:12:17.610 }, 00:12:17.610 { 00:12:17.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.610 "dma_device_type": 2 00:12:17.610 }, 00:12:17.610 { 00:12:17.610 "dma_device_id": "system", 00:12:17.610 "dma_device_type": 1 00:12:17.610 }, 00:12:17.610 { 00:12:17.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.610 "dma_device_type": 2 00:12:17.610 }, 00:12:17.610 { 00:12:17.610 "dma_device_id": "system", 00:12:17.610 "dma_device_type": 1 00:12:17.610 }, 00:12:17.610 { 00:12:17.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.610 "dma_device_type": 2 00:12:17.610 } 00:12:17.610 ], 00:12:17.610 "driver_specific": { 00:12:17.610 "raid": { 00:12:17.610 "uuid": "ae0f2efc-cf3a-4022-ae94-ad138c8a00db", 00:12:17.610 "strip_size_kb": 0, 00:12:17.610 "state": "online", 00:12:17.610 "raid_level": "raid1", 00:12:17.610 "superblock": true, 00:12:17.610 "num_base_bdevs": 4, 00:12:17.610 "num_base_bdevs_discovered": 4, 00:12:17.610 "num_base_bdevs_operational": 4, 00:12:17.610 "base_bdevs_list": [ 00:12:17.610 { 00:12:17.610 "name": "NewBaseBdev", 00:12:17.610 "uuid": "8d3caf58-3864-4078-9467-c62e48cbd8bc", 00:12:17.610 "is_configured": true, 00:12:17.610 "data_offset": 2048, 00:12:17.610 "data_size": 63488 00:12:17.610 }, 00:12:17.610 { 00:12:17.610 "name": "BaseBdev2", 00:12:17.610 "uuid": "b77982a7-02d0-47d4-9021-3dc1a1a1e4b4", 00:12:17.610 "is_configured": true, 00:12:17.610 "data_offset": 2048, 00:12:17.610 "data_size": 63488 00:12:17.610 }, 00:12:17.610 { 
00:12:17.610 "name": "BaseBdev3", 00:12:17.610 "uuid": "df222d3d-a0c7-4f16-95e2-e94534d955dc", 00:12:17.610 "is_configured": true, 00:12:17.610 "data_offset": 2048, 00:12:17.610 "data_size": 63488 00:12:17.610 }, 00:12:17.610 { 00:12:17.610 "name": "BaseBdev4", 00:12:17.610 "uuid": "79164077-0623-4c5c-8ae1-c6a72410a54c", 00:12:17.610 "is_configured": true, 00:12:17.610 "data_offset": 2048, 00:12:17.610 "data_size": 63488 00:12:17.610 } 00:12:17.610 ] 00:12:17.610 } 00:12:17.610 } 00:12:17.610 }' 00:12:17.610 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:17.610 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:17.610 BaseBdev2 00:12:17.610 BaseBdev3 00:12:17.610 BaseBdev4' 00:12:17.610 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.610 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:17.610 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.610 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:17.610 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.610 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.610 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.869 [2024-11-26 17:56:59.653331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:17.869 [2024-11-26 17:56:59.653450] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.869 [2024-11-26 17:56:59.653589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.869 [2024-11-26 17:56:59.653970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.869 [2024-11-26 17:56:59.654060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74163 00:12:17.869 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74163 ']' 00:12:17.870 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74163 00:12:17.870 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:17.870 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.870 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74163 00:12:17.870 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.870 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.870 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74163' 00:12:17.870 killing process with pid 74163 00:12:17.870 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74163 00:12:17.870 [2024-11-26 17:56:59.703600] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:17.870 17:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74163 00:12:18.438 [2024-11-26 17:57:00.191942] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:19.834 17:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:19.834 00:12:19.834 real 0m12.519s 00:12:19.834 user 0m19.690s 00:12:19.834 sys 0m2.095s 00:12:19.834 ************************************ 00:12:19.834 END TEST raid_state_function_test_sb 
00:12:19.834 ************************************ 00:12:19.834 17:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.834 17:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.834 17:57:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:19.834 17:57:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:19.834 17:57:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.834 17:57:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:19.834 ************************************ 00:12:19.834 START TEST raid_superblock_test 00:12:19.834 ************************************ 00:12:19.834 17:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:19.834 17:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:19.834 17:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:19.834 17:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:19.834 17:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:19.834 17:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:19.834 17:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:19.834 17:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:19.834 17:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:19.834 17:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:19.835 17:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:19.835 17:57:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:19.835 17:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:19.835 17:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:19.835 17:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:19.835 17:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:19.835 17:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74839 00:12:19.835 17:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74839 00:12:19.835 17:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:19.835 17:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74839 ']' 00:12:19.835 17:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.835 17:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.835 17:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.835 17:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.835 17:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.095 [2024-11-26 17:57:01.758636] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:12:20.095 [2024-11-26 17:57:01.758775] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74839 ] 00:12:20.095 [2024-11-26 17:57:01.937752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.355 [2024-11-26 17:57:02.076586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.615 [2024-11-26 17:57:02.318991] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.615 [2024-11-26 17:57:02.319176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.875 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.875 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:20.875 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:20.875 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:20.875 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:20.875 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:20.875 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:20.875 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:20.875 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:20.875 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:20.875 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:20.875 
17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.875 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.165 malloc1 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.165 [2024-11-26 17:57:02.746770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:21.165 [2024-11-26 17:57:02.746944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.165 [2024-11-26 17:57:02.747030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:21.165 [2024-11-26 17:57:02.747076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.165 [2024-11-26 17:57:02.749765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.165 [2024-11-26 17:57:02.749874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:21.165 pt1 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.165 malloc2 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.165 [2024-11-26 17:57:02.812897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:21.165 [2024-11-26 17:57:02.812990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.165 [2024-11-26 17:57:02.813041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:21.165 [2024-11-26 17:57:02.813054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.165 [2024-11-26 17:57:02.815662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.165 [2024-11-26 17:57:02.815717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:21.165 
pt2 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.165 malloc3 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.165 [2024-11-26 17:57:02.891703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:21.165 [2024-11-26 17:57:02.891849] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.165 [2024-11-26 17:57:02.891905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:21.165 [2024-11-26 17:57:02.891941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.165 [2024-11-26 17:57:02.894562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.165 [2024-11-26 17:57:02.894671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:21.165 pt3 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:21.165 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.166 malloc4 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.166 [2024-11-26 17:57:02.958399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:21.166 [2024-11-26 17:57:02.958566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.166 [2024-11-26 17:57:02.958631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:21.166 [2024-11-26 17:57:02.958667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.166 [2024-11-26 17:57:02.961336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.166 [2024-11-26 17:57:02.961445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:21.166 pt4 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.166 [2024-11-26 17:57:02.974503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:21.166 [2024-11-26 17:57:02.976795] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:21.166 [2024-11-26 17:57:02.976935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:21.166 [2024-11-26 17:57:02.977060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:21.166 [2024-11-26 17:57:02.977360] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:21.166 [2024-11-26 17:57:02.977422] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:21.166 [2024-11-26 17:57:02.977792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:21.166 [2024-11-26 17:57:02.978071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:21.166 [2024-11-26 17:57:02.978131] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:21.166 [2024-11-26 17:57:02.978428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.166 
17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.166 17:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.166 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.423 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.423 "name": "raid_bdev1", 00:12:21.423 "uuid": "1394334d-2e60-4941-a4d6-cf1eceddcf4f", 00:12:21.423 "strip_size_kb": 0, 00:12:21.423 "state": "online", 00:12:21.423 "raid_level": "raid1", 00:12:21.423 "superblock": true, 00:12:21.423 "num_base_bdevs": 4, 00:12:21.423 "num_base_bdevs_discovered": 4, 00:12:21.423 "num_base_bdevs_operational": 4, 00:12:21.423 "base_bdevs_list": [ 00:12:21.423 { 00:12:21.423 "name": "pt1", 00:12:21.423 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:21.423 "is_configured": true, 00:12:21.423 "data_offset": 2048, 00:12:21.423 "data_size": 63488 00:12:21.423 }, 00:12:21.423 { 00:12:21.423 "name": "pt2", 00:12:21.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:21.423 "is_configured": true, 00:12:21.423 "data_offset": 2048, 00:12:21.423 "data_size": 63488 00:12:21.423 }, 00:12:21.423 { 00:12:21.423 "name": "pt3", 00:12:21.423 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:21.423 "is_configured": true, 00:12:21.423 "data_offset": 2048, 00:12:21.423 "data_size": 63488 
00:12:21.423 }, 00:12:21.423 { 00:12:21.423 "name": "pt4", 00:12:21.423 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:21.423 "is_configured": true, 00:12:21.423 "data_offset": 2048, 00:12:21.423 "data_size": 63488 00:12:21.423 } 00:12:21.423 ] 00:12:21.423 }' 00:12:21.423 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.423 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.681 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:21.681 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:21.681 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:21.681 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:21.681 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:21.681 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:21.681 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:21.681 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.681 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.681 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:21.681 [2024-11-26 17:57:03.466337] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.681 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.681 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:21.681 "name": "raid_bdev1", 00:12:21.681 "aliases": [ 00:12:21.681 "1394334d-2e60-4941-a4d6-cf1eceddcf4f" 00:12:21.681 ], 
00:12:21.681 "product_name": "Raid Volume", 00:12:21.681 "block_size": 512, 00:12:21.681 "num_blocks": 63488, 00:12:21.681 "uuid": "1394334d-2e60-4941-a4d6-cf1eceddcf4f", 00:12:21.681 "assigned_rate_limits": { 00:12:21.681 "rw_ios_per_sec": 0, 00:12:21.681 "rw_mbytes_per_sec": 0, 00:12:21.681 "r_mbytes_per_sec": 0, 00:12:21.681 "w_mbytes_per_sec": 0 00:12:21.681 }, 00:12:21.681 "claimed": false, 00:12:21.681 "zoned": false, 00:12:21.681 "supported_io_types": { 00:12:21.681 "read": true, 00:12:21.681 "write": true, 00:12:21.681 "unmap": false, 00:12:21.681 "flush": false, 00:12:21.681 "reset": true, 00:12:21.681 "nvme_admin": false, 00:12:21.681 "nvme_io": false, 00:12:21.681 "nvme_io_md": false, 00:12:21.681 "write_zeroes": true, 00:12:21.681 "zcopy": false, 00:12:21.681 "get_zone_info": false, 00:12:21.681 "zone_management": false, 00:12:21.681 "zone_append": false, 00:12:21.681 "compare": false, 00:12:21.681 "compare_and_write": false, 00:12:21.681 "abort": false, 00:12:21.681 "seek_hole": false, 00:12:21.681 "seek_data": false, 00:12:21.681 "copy": false, 00:12:21.681 "nvme_iov_md": false 00:12:21.681 }, 00:12:21.681 "memory_domains": [ 00:12:21.681 { 00:12:21.681 "dma_device_id": "system", 00:12:21.681 "dma_device_type": 1 00:12:21.681 }, 00:12:21.681 { 00:12:21.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.681 "dma_device_type": 2 00:12:21.681 }, 00:12:21.681 { 00:12:21.681 "dma_device_id": "system", 00:12:21.681 "dma_device_type": 1 00:12:21.681 }, 00:12:21.681 { 00:12:21.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.682 "dma_device_type": 2 00:12:21.682 }, 00:12:21.682 { 00:12:21.682 "dma_device_id": "system", 00:12:21.682 "dma_device_type": 1 00:12:21.682 }, 00:12:21.682 { 00:12:21.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.682 "dma_device_type": 2 00:12:21.682 }, 00:12:21.682 { 00:12:21.682 "dma_device_id": "system", 00:12:21.682 "dma_device_type": 1 00:12:21.682 }, 00:12:21.682 { 00:12:21.682 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:21.682 "dma_device_type": 2 00:12:21.682 } 00:12:21.682 ], 00:12:21.682 "driver_specific": { 00:12:21.682 "raid": { 00:12:21.682 "uuid": "1394334d-2e60-4941-a4d6-cf1eceddcf4f", 00:12:21.682 "strip_size_kb": 0, 00:12:21.682 "state": "online", 00:12:21.682 "raid_level": "raid1", 00:12:21.682 "superblock": true, 00:12:21.682 "num_base_bdevs": 4, 00:12:21.682 "num_base_bdevs_discovered": 4, 00:12:21.682 "num_base_bdevs_operational": 4, 00:12:21.682 "base_bdevs_list": [ 00:12:21.682 { 00:12:21.682 "name": "pt1", 00:12:21.682 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:21.682 "is_configured": true, 00:12:21.682 "data_offset": 2048, 00:12:21.682 "data_size": 63488 00:12:21.682 }, 00:12:21.682 { 00:12:21.682 "name": "pt2", 00:12:21.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:21.682 "is_configured": true, 00:12:21.682 "data_offset": 2048, 00:12:21.682 "data_size": 63488 00:12:21.682 }, 00:12:21.682 { 00:12:21.682 "name": "pt3", 00:12:21.682 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:21.682 "is_configured": true, 00:12:21.682 "data_offset": 2048, 00:12:21.682 "data_size": 63488 00:12:21.682 }, 00:12:21.682 { 00:12:21.682 "name": "pt4", 00:12:21.682 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:21.682 "is_configured": true, 00:12:21.682 "data_offset": 2048, 00:12:21.682 "data_size": 63488 00:12:21.682 } 00:12:21.682 ] 00:12:21.682 } 00:12:21.682 } 00:12:21.682 }' 00:12:21.682 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:21.939 pt2 00:12:21.939 pt3 00:12:21.939 pt4' 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.939 17:57:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.939 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:21.940 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:21.940 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:21.940 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.199 [2024-11-26 17:57:03.801715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.199 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.199 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1394334d-2e60-4941-a4d6-cf1eceddcf4f 00:12:22.199 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1394334d-2e60-4941-a4d6-cf1eceddcf4f ']' 00:12:22.199 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:22.199 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.199 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.199 [2024-11-26 17:57:03.853354] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:22.199 [2024-11-26 17:57:03.853393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.199 [2024-11-26 17:57:03.853505] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.199 [2024-11-26 17:57:03.853606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.199 [2024-11-26 17:57:03.853624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:22.199 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.199 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.200 17:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.200 17:57:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.200 [2024-11-26 17:57:04.025235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:22.200 [2024-11-26 17:57:04.027486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:22.200 [2024-11-26 17:57:04.027617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:22.200 [2024-11-26 17:57:04.027704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:22.200 [2024-11-26 17:57:04.027810] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:22.200 [2024-11-26 17:57:04.027931] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:22.200 [2024-11-26 17:57:04.028013] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:22.200 [2024-11-26 17:57:04.028102] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:22.200 [2024-11-26 17:57:04.028165] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:22.200 [2024-11-26 17:57:04.028210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:12:22.200 request: 00:12:22.200 { 00:12:22.200 "name": "raid_bdev1", 00:12:22.200 "raid_level": "raid1", 00:12:22.200 "base_bdevs": [ 00:12:22.200 "malloc1", 00:12:22.200 "malloc2", 00:12:22.200 "malloc3", 00:12:22.200 "malloc4" 00:12:22.200 ], 00:12:22.200 "superblock": false, 00:12:22.200 "method": "bdev_raid_create", 00:12:22.200 "req_id": 1 00:12:22.200 } 00:12:22.200 Got JSON-RPC error response 00:12:22.200 response: 00:12:22.200 { 00:12:22.200 "code": -17, 00:12:22.200 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:22.200 } 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.200 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.459 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:22.459 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:22.459 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:22.459 
17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.459 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.459 [2024-11-26 17:57:04.085216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:22.459 [2024-11-26 17:57:04.085377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.459 [2024-11-26 17:57:04.085438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:22.460 [2024-11-26 17:57:04.085475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.460 [2024-11-26 17:57:04.088004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.460 [2024-11-26 17:57:04.088120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:22.460 [2024-11-26 17:57:04.088266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:22.460 [2024-11-26 17:57:04.088368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:22.460 pt1 00:12:22.460 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.460 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:22.460 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.460 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.460 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.460 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.460 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.460 17:57:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.460 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.460 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.460 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.460 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.460 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.460 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.460 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.460 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.460 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.460 "name": "raid_bdev1", 00:12:22.460 "uuid": "1394334d-2e60-4941-a4d6-cf1eceddcf4f", 00:12:22.460 "strip_size_kb": 0, 00:12:22.460 "state": "configuring", 00:12:22.460 "raid_level": "raid1", 00:12:22.460 "superblock": true, 00:12:22.460 "num_base_bdevs": 4, 00:12:22.460 "num_base_bdevs_discovered": 1, 00:12:22.460 "num_base_bdevs_operational": 4, 00:12:22.460 "base_bdevs_list": [ 00:12:22.460 { 00:12:22.460 "name": "pt1", 00:12:22.460 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.460 "is_configured": true, 00:12:22.460 "data_offset": 2048, 00:12:22.460 "data_size": 63488 00:12:22.460 }, 00:12:22.460 { 00:12:22.460 "name": null, 00:12:22.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.460 "is_configured": false, 00:12:22.460 "data_offset": 2048, 00:12:22.460 "data_size": 63488 00:12:22.460 }, 00:12:22.460 { 00:12:22.460 "name": null, 00:12:22.460 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.460 
"is_configured": false, 00:12:22.460 "data_offset": 2048, 00:12:22.460 "data_size": 63488 00:12:22.460 }, 00:12:22.460 { 00:12:22.460 "name": null, 00:12:22.460 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:22.460 "is_configured": false, 00:12:22.460 "data_offset": 2048, 00:12:22.460 "data_size": 63488 00:12:22.460 } 00:12:22.460 ] 00:12:22.460 }' 00:12:22.460 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.460 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.732 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.733 [2024-11-26 17:57:04.548586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:22.733 [2024-11-26 17:57:04.548757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.733 [2024-11-26 17:57:04.548820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:22.733 [2024-11-26 17:57:04.548860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.733 [2024-11-26 17:57:04.549496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.733 [2024-11-26 17:57:04.549569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:22.733 [2024-11-26 17:57:04.549703] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:22.733 [2024-11-26 17:57:04.549766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:22.733 pt2 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.733 [2024-11-26 17:57:04.560596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.733 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.992 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.992 "name": "raid_bdev1", 00:12:22.992 "uuid": "1394334d-2e60-4941-a4d6-cf1eceddcf4f", 00:12:22.992 "strip_size_kb": 0, 00:12:22.992 "state": "configuring", 00:12:22.992 "raid_level": "raid1", 00:12:22.992 "superblock": true, 00:12:22.992 "num_base_bdevs": 4, 00:12:22.992 "num_base_bdevs_discovered": 1, 00:12:22.992 "num_base_bdevs_operational": 4, 00:12:22.992 "base_bdevs_list": [ 00:12:22.992 { 00:12:22.992 "name": "pt1", 00:12:22.992 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.992 "is_configured": true, 00:12:22.992 "data_offset": 2048, 00:12:22.992 "data_size": 63488 00:12:22.992 }, 00:12:22.992 { 00:12:22.992 "name": null, 00:12:22.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.992 "is_configured": false, 00:12:22.992 "data_offset": 0, 00:12:22.992 "data_size": 63488 00:12:22.992 }, 00:12:22.992 { 00:12:22.992 "name": null, 00:12:22.992 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.992 "is_configured": false, 00:12:22.992 "data_offset": 2048, 00:12:22.992 "data_size": 63488 00:12:22.992 }, 00:12:22.992 { 00:12:22.992 "name": null, 00:12:22.992 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:22.992 "is_configured": false, 00:12:22.992 "data_offset": 2048, 00:12:22.992 "data_size": 63488 00:12:22.992 } 00:12:22.992 ] 00:12:22.992 }' 00:12:22.992 17:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.992 17:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.252 [2024-11-26 17:57:05.044040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:23.252 [2024-11-26 17:57:05.044126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.252 [2024-11-26 17:57:05.044151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:23.252 [2024-11-26 17:57:05.044162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.252 [2024-11-26 17:57:05.044700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.252 [2024-11-26 17:57:05.044721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:23.252 [2024-11-26 17:57:05.044820] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:23.252 [2024-11-26 17:57:05.044845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:23.252 pt2 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:23.252 17:57:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.252 [2024-11-26 17:57:05.056002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:23.252 [2024-11-26 17:57:05.056098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.252 [2024-11-26 17:57:05.056123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:23.252 [2024-11-26 17:57:05.056134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.252 [2024-11-26 17:57:05.056653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.252 [2024-11-26 17:57:05.056682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:23.252 [2024-11-26 17:57:05.056777] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:23.252 [2024-11-26 17:57:05.056802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:23.252 pt3 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.252 [2024-11-26 17:57:05.067946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:23.252 [2024-11-26 
17:57:05.068100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.252 [2024-11-26 17:57:05.068132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:23.252 [2024-11-26 17:57:05.068143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.252 [2024-11-26 17:57:05.068662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.252 [2024-11-26 17:57:05.068691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:23.252 [2024-11-26 17:57:05.068785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:23.252 [2024-11-26 17:57:05.068835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:23.252 [2024-11-26 17:57:05.069007] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:23.252 [2024-11-26 17:57:05.069037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:23.252 [2024-11-26 17:57:05.069330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:23.252 [2024-11-26 17:57:05.069535] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:23.252 [2024-11-26 17:57:05.069551] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:23.252 [2024-11-26 17:57:05.069718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.252 pt4 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.252 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.253 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.253 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.253 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.253 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.253 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.253 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.253 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.253 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.253 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.510 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.510 "name": "raid_bdev1", 00:12:23.510 "uuid": "1394334d-2e60-4941-a4d6-cf1eceddcf4f", 00:12:23.510 "strip_size_kb": 0, 00:12:23.510 "state": "online", 00:12:23.510 "raid_level": "raid1", 00:12:23.510 "superblock": true, 00:12:23.510 "num_base_bdevs": 4, 00:12:23.510 
"num_base_bdevs_discovered": 4, 00:12:23.510 "num_base_bdevs_operational": 4, 00:12:23.510 "base_bdevs_list": [ 00:12:23.510 { 00:12:23.510 "name": "pt1", 00:12:23.510 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:23.510 "is_configured": true, 00:12:23.510 "data_offset": 2048, 00:12:23.510 "data_size": 63488 00:12:23.510 }, 00:12:23.510 { 00:12:23.510 "name": "pt2", 00:12:23.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.510 "is_configured": true, 00:12:23.510 "data_offset": 2048, 00:12:23.510 "data_size": 63488 00:12:23.510 }, 00:12:23.510 { 00:12:23.510 "name": "pt3", 00:12:23.510 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:23.510 "is_configured": true, 00:12:23.510 "data_offset": 2048, 00:12:23.510 "data_size": 63488 00:12:23.510 }, 00:12:23.510 { 00:12:23.510 "name": "pt4", 00:12:23.510 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:23.510 "is_configured": true, 00:12:23.510 "data_offset": 2048, 00:12:23.510 "data_size": 63488 00:12:23.510 } 00:12:23.510 ] 00:12:23.510 }' 00:12:23.510 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.510 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.769 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:23.769 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:23.769 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:23.769 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:23.769 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:23.769 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:23.769 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:23.769 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.769 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.769 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:23.769 [2024-11-26 17:57:05.527598] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.769 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.769 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:23.769 "name": "raid_bdev1", 00:12:23.769 "aliases": [ 00:12:23.769 "1394334d-2e60-4941-a4d6-cf1eceddcf4f" 00:12:23.769 ], 00:12:23.769 "product_name": "Raid Volume", 00:12:23.769 "block_size": 512, 00:12:23.770 "num_blocks": 63488, 00:12:23.770 "uuid": "1394334d-2e60-4941-a4d6-cf1eceddcf4f", 00:12:23.770 "assigned_rate_limits": { 00:12:23.770 "rw_ios_per_sec": 0, 00:12:23.770 "rw_mbytes_per_sec": 0, 00:12:23.770 "r_mbytes_per_sec": 0, 00:12:23.770 "w_mbytes_per_sec": 0 00:12:23.770 }, 00:12:23.770 "claimed": false, 00:12:23.770 "zoned": false, 00:12:23.770 "supported_io_types": { 00:12:23.770 "read": true, 00:12:23.770 "write": true, 00:12:23.770 "unmap": false, 00:12:23.770 "flush": false, 00:12:23.770 "reset": true, 00:12:23.770 "nvme_admin": false, 00:12:23.770 "nvme_io": false, 00:12:23.770 "nvme_io_md": false, 00:12:23.770 "write_zeroes": true, 00:12:23.770 "zcopy": false, 00:12:23.770 "get_zone_info": false, 00:12:23.770 "zone_management": false, 00:12:23.770 "zone_append": false, 00:12:23.770 "compare": false, 00:12:23.770 "compare_and_write": false, 00:12:23.770 "abort": false, 00:12:23.770 "seek_hole": false, 00:12:23.770 "seek_data": false, 00:12:23.770 "copy": false, 00:12:23.770 "nvme_iov_md": false 00:12:23.770 }, 00:12:23.770 "memory_domains": [ 00:12:23.770 { 00:12:23.770 "dma_device_id": "system", 00:12:23.770 
"dma_device_type": 1 00:12:23.770 }, 00:12:23.770 { 00:12:23.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.770 "dma_device_type": 2 00:12:23.770 }, 00:12:23.770 { 00:12:23.770 "dma_device_id": "system", 00:12:23.770 "dma_device_type": 1 00:12:23.770 }, 00:12:23.770 { 00:12:23.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.770 "dma_device_type": 2 00:12:23.770 }, 00:12:23.770 { 00:12:23.770 "dma_device_id": "system", 00:12:23.770 "dma_device_type": 1 00:12:23.770 }, 00:12:23.770 { 00:12:23.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.770 "dma_device_type": 2 00:12:23.770 }, 00:12:23.770 { 00:12:23.770 "dma_device_id": "system", 00:12:23.770 "dma_device_type": 1 00:12:23.770 }, 00:12:23.770 { 00:12:23.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.770 "dma_device_type": 2 00:12:23.770 } 00:12:23.770 ], 00:12:23.770 "driver_specific": { 00:12:23.770 "raid": { 00:12:23.770 "uuid": "1394334d-2e60-4941-a4d6-cf1eceddcf4f", 00:12:23.770 "strip_size_kb": 0, 00:12:23.770 "state": "online", 00:12:23.770 "raid_level": "raid1", 00:12:23.770 "superblock": true, 00:12:23.770 "num_base_bdevs": 4, 00:12:23.770 "num_base_bdevs_discovered": 4, 00:12:23.770 "num_base_bdevs_operational": 4, 00:12:23.770 "base_bdevs_list": [ 00:12:23.770 { 00:12:23.770 "name": "pt1", 00:12:23.770 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:23.770 "is_configured": true, 00:12:23.770 "data_offset": 2048, 00:12:23.770 "data_size": 63488 00:12:23.770 }, 00:12:23.770 { 00:12:23.770 "name": "pt2", 00:12:23.770 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.770 "is_configured": true, 00:12:23.770 "data_offset": 2048, 00:12:23.770 "data_size": 63488 00:12:23.770 }, 00:12:23.770 { 00:12:23.770 "name": "pt3", 00:12:23.770 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:23.770 "is_configured": true, 00:12:23.770 "data_offset": 2048, 00:12:23.770 "data_size": 63488 00:12:23.770 }, 00:12:23.770 { 00:12:23.770 "name": "pt4", 00:12:23.770 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:23.770 "is_configured": true, 00:12:23.770 "data_offset": 2048, 00:12:23.770 "data_size": 63488 00:12:23.770 } 00:12:23.770 ] 00:12:23.770 } 00:12:23.770 } 00:12:23.770 }' 00:12:23.770 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:23.770 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:23.770 pt2 00:12:23.770 pt3 00:12:23.770 pt4' 00:12:23.770 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.091 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:24.092 [2024-11-26 17:57:05.855061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1394334d-2e60-4941-a4d6-cf1eceddcf4f '!=' 1394334d-2e60-4941-a4d6-cf1eceddcf4f ']' 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.092 [2024-11-26 17:57:05.906653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:24.092 17:57:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.092 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.351 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.351 "name": "raid_bdev1", 00:12:24.351 "uuid": "1394334d-2e60-4941-a4d6-cf1eceddcf4f", 00:12:24.351 "strip_size_kb": 0, 00:12:24.351 "state": "online", 
00:12:24.351 "raid_level": "raid1", 00:12:24.351 "superblock": true, 00:12:24.351 "num_base_bdevs": 4, 00:12:24.351 "num_base_bdevs_discovered": 3, 00:12:24.351 "num_base_bdevs_operational": 3, 00:12:24.351 "base_bdevs_list": [ 00:12:24.351 { 00:12:24.351 "name": null, 00:12:24.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.351 "is_configured": false, 00:12:24.351 "data_offset": 0, 00:12:24.351 "data_size": 63488 00:12:24.351 }, 00:12:24.351 { 00:12:24.351 "name": "pt2", 00:12:24.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:24.351 "is_configured": true, 00:12:24.351 "data_offset": 2048, 00:12:24.351 "data_size": 63488 00:12:24.351 }, 00:12:24.351 { 00:12:24.351 "name": "pt3", 00:12:24.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:24.351 "is_configured": true, 00:12:24.351 "data_offset": 2048, 00:12:24.351 "data_size": 63488 00:12:24.351 }, 00:12:24.351 { 00:12:24.351 "name": "pt4", 00:12:24.351 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:24.351 "is_configured": true, 00:12:24.351 "data_offset": 2048, 00:12:24.351 "data_size": 63488 00:12:24.351 } 00:12:24.351 ] 00:12:24.351 }' 00:12:24.351 17:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.351 17:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.609 [2024-11-26 17:57:06.394185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:24.609 [2024-11-26 17:57:06.394284] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.609 [2024-11-26 17:57:06.394418] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:24.609 [2024-11-26 17:57:06.394543] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.609 [2024-11-26 17:57:06.394597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:24.609 
17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.609 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.868 [2024-11-26 17:57:06.494214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:24.868 [2024-11-26 17:57:06.494295] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.868 [2024-11-26 17:57:06.494319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:24.868 [2024-11-26 17:57:06.494330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.868 [2024-11-26 17:57:06.496925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.868 [2024-11-26 17:57:06.496971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:24.868 [2024-11-26 17:57:06.497098] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:24.868 [2024-11-26 17:57:06.497153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:24.868 pt2 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.868 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.868 "name": "raid_bdev1", 00:12:24.868 "uuid": "1394334d-2e60-4941-a4d6-cf1eceddcf4f", 00:12:24.868 "strip_size_kb": 0, 00:12:24.868 "state": "configuring", 00:12:24.868 "raid_level": "raid1", 00:12:24.868 "superblock": true, 00:12:24.868 "num_base_bdevs": 4, 00:12:24.868 "num_base_bdevs_discovered": 1, 00:12:24.868 "num_base_bdevs_operational": 3, 00:12:24.868 "base_bdevs_list": [ 00:12:24.868 { 00:12:24.868 "name": null, 00:12:24.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.868 "is_configured": false, 00:12:24.868 "data_offset": 2048, 00:12:24.868 "data_size": 63488 00:12:24.868 }, 00:12:24.868 { 00:12:24.868 "name": "pt2", 00:12:24.868 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:24.868 "is_configured": true, 00:12:24.869 "data_offset": 2048, 00:12:24.869 "data_size": 63488 00:12:24.869 }, 00:12:24.869 { 00:12:24.869 "name": null, 00:12:24.869 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:24.869 "is_configured": false, 00:12:24.869 "data_offset": 2048, 00:12:24.869 "data_size": 63488 00:12:24.869 }, 00:12:24.869 { 00:12:24.869 "name": null, 00:12:24.869 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:24.869 "is_configured": false, 00:12:24.869 "data_offset": 2048, 00:12:24.869 "data_size": 63488 00:12:24.869 } 00:12:24.869 ] 00:12:24.869 }' 
00:12:24.869 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.869 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.127 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:25.127 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:25.127 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:25.127 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.127 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.127 [2024-11-26 17:57:06.986052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:25.127 [2024-11-26 17:57:06.986207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.127 [2024-11-26 17:57:06.986277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:25.127 [2024-11-26 17:57:06.986320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.127 [2024-11-26 17:57:06.986978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.127 [2024-11-26 17:57:06.987070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:25.127 [2024-11-26 17:57:06.987222] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:25.127 [2024-11-26 17:57:06.987281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:25.386 pt3 00:12:25.386 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.386 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:25.386 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.386 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.386 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.386 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.386 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:25.386 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.386 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.386 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.386 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.386 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.386 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.386 17:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.386 17:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.386 17:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.386 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.386 "name": "raid_bdev1", 00:12:25.386 "uuid": "1394334d-2e60-4941-a4d6-cf1eceddcf4f", 00:12:25.386 "strip_size_kb": 0, 00:12:25.386 "state": "configuring", 00:12:25.386 "raid_level": "raid1", 00:12:25.386 "superblock": true, 00:12:25.386 "num_base_bdevs": 4, 00:12:25.386 "num_base_bdevs_discovered": 2, 00:12:25.386 "num_base_bdevs_operational": 3, 00:12:25.386 
"base_bdevs_list": [ 00:12:25.386 { 00:12:25.386 "name": null, 00:12:25.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.386 "is_configured": false, 00:12:25.386 "data_offset": 2048, 00:12:25.386 "data_size": 63488 00:12:25.386 }, 00:12:25.386 { 00:12:25.386 "name": "pt2", 00:12:25.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.386 "is_configured": true, 00:12:25.386 "data_offset": 2048, 00:12:25.386 "data_size": 63488 00:12:25.386 }, 00:12:25.386 { 00:12:25.386 "name": "pt3", 00:12:25.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.386 "is_configured": true, 00:12:25.386 "data_offset": 2048, 00:12:25.386 "data_size": 63488 00:12:25.386 }, 00:12:25.386 { 00:12:25.386 "name": null, 00:12:25.386 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:25.386 "is_configured": false, 00:12:25.386 "data_offset": 2048, 00:12:25.386 "data_size": 63488 00:12:25.386 } 00:12:25.386 ] 00:12:25.386 }' 00:12:25.386 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.386 17:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.645 [2024-11-26 17:57:07.477347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:25.645 [2024-11-26 17:57:07.477444] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.645 [2024-11-26 17:57:07.477478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:25.645 [2024-11-26 17:57:07.477489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.645 [2024-11-26 17:57:07.478008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.645 [2024-11-26 17:57:07.478046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:25.645 [2024-11-26 17:57:07.478155] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:25.645 [2024-11-26 17:57:07.478188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:25.645 [2024-11-26 17:57:07.478353] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:25.645 [2024-11-26 17:57:07.478369] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:25.645 [2024-11-26 17:57:07.478654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:25.645 [2024-11-26 17:57:07.478826] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:25.645 [2024-11-26 17:57:07.478842] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:25.645 [2024-11-26 17:57:07.479003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.645 pt4 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.645 17:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.904 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.904 "name": "raid_bdev1", 00:12:25.904 "uuid": "1394334d-2e60-4941-a4d6-cf1eceddcf4f", 00:12:25.904 "strip_size_kb": 0, 00:12:25.904 "state": "online", 00:12:25.904 "raid_level": "raid1", 00:12:25.904 "superblock": true, 00:12:25.904 "num_base_bdevs": 4, 00:12:25.904 "num_base_bdevs_discovered": 3, 00:12:25.904 "num_base_bdevs_operational": 3, 00:12:25.904 "base_bdevs_list": [ 00:12:25.904 { 00:12:25.904 "name": null, 00:12:25.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.904 "is_configured": false, 00:12:25.904 
"data_offset": 2048, 00:12:25.904 "data_size": 63488 00:12:25.904 }, 00:12:25.904 { 00:12:25.904 "name": "pt2", 00:12:25.904 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.904 "is_configured": true, 00:12:25.904 "data_offset": 2048, 00:12:25.904 "data_size": 63488 00:12:25.904 }, 00:12:25.904 { 00:12:25.904 "name": "pt3", 00:12:25.904 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.904 "is_configured": true, 00:12:25.904 "data_offset": 2048, 00:12:25.904 "data_size": 63488 00:12:25.904 }, 00:12:25.904 { 00:12:25.904 "name": "pt4", 00:12:25.904 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:25.904 "is_configured": true, 00:12:25.904 "data_offset": 2048, 00:12:25.904 "data_size": 63488 00:12:25.904 } 00:12:25.904 ] 00:12:25.904 }' 00:12:25.905 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.905 17:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.164 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:26.164 17:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.164 17:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.164 [2024-11-26 17:57:07.960497] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.164 [2024-11-26 17:57:07.960603] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:26.164 [2024-11-26 17:57:07.960713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.164 [2024-11-26 17:57:07.960801] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.164 [2024-11-26 17:57:07.960815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:26.164 17:57:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.164 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.164 17:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.164 17:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.164 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:26.164 17:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.164 17:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:26.164 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:26.164 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:26.164 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:26.164 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:26.164 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.164 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.164 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.164 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:26.164 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.164 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.164 [2024-11-26 17:57:08.020411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:26.164 [2024-11-26 17:57:08.020510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:26.164 [2024-11-26 17:57:08.020535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:26.164 [2024-11-26 17:57:08.020550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.164 [2024-11-26 17:57:08.023254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.164 [2024-11-26 17:57:08.023393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:26.164 [2024-11-26 17:57:08.023537] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:26.164 [2024-11-26 17:57:08.023610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:26.164 [2024-11-26 17:57:08.023806] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:26.164 [2024-11-26 17:57:08.023827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.164 [2024-11-26 17:57:08.023849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:26.164 [2024-11-26 17:57:08.023946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:26.164 [2024-11-26 17:57:08.024115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:26.424 pt1 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.424 "name": "raid_bdev1", 00:12:26.424 "uuid": "1394334d-2e60-4941-a4d6-cf1eceddcf4f", 00:12:26.424 "strip_size_kb": 0, 00:12:26.424 "state": "configuring", 00:12:26.424 "raid_level": "raid1", 00:12:26.424 "superblock": true, 00:12:26.424 "num_base_bdevs": 4, 00:12:26.424 "num_base_bdevs_discovered": 2, 00:12:26.424 "num_base_bdevs_operational": 3, 00:12:26.424 "base_bdevs_list": [ 00:12:26.424 { 00:12:26.424 "name": null, 00:12:26.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.424 "is_configured": false, 00:12:26.424 "data_offset": 2048, 00:12:26.424 
"data_size": 63488 00:12:26.424 }, 00:12:26.424 { 00:12:26.424 "name": "pt2", 00:12:26.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.424 "is_configured": true, 00:12:26.424 "data_offset": 2048, 00:12:26.424 "data_size": 63488 00:12:26.424 }, 00:12:26.424 { 00:12:26.424 "name": "pt3", 00:12:26.424 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.424 "is_configured": true, 00:12:26.424 "data_offset": 2048, 00:12:26.424 "data_size": 63488 00:12:26.424 }, 00:12:26.424 { 00:12:26.424 "name": null, 00:12:26.424 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.424 "is_configured": false, 00:12:26.424 "data_offset": 2048, 00:12:26.424 "data_size": 63488 00:12:26.424 } 00:12:26.424 ] 00:12:26.424 }' 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.424 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.684 [2024-11-26 
17:57:08.531680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:26.684 [2024-11-26 17:57:08.531771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.684 [2024-11-26 17:57:08.531800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:26.684 [2024-11-26 17:57:08.531811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.684 [2024-11-26 17:57:08.532355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.684 [2024-11-26 17:57:08.532384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:26.684 [2024-11-26 17:57:08.532492] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:26.684 [2024-11-26 17:57:08.532520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:26.684 [2024-11-26 17:57:08.532677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:26.684 [2024-11-26 17:57:08.532687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:26.684 [2024-11-26 17:57:08.532978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:26.684 [2024-11-26 17:57:08.533182] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:26.684 [2024-11-26 17:57:08.533210] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:26.684 [2024-11-26 17:57:08.533381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.684 pt4 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:26.684 17:57:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.684 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.944 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.944 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.944 "name": "raid_bdev1", 00:12:26.944 "uuid": "1394334d-2e60-4941-a4d6-cf1eceddcf4f", 00:12:26.944 "strip_size_kb": 0, 00:12:26.944 "state": "online", 00:12:26.944 "raid_level": "raid1", 00:12:26.944 "superblock": true, 00:12:26.944 "num_base_bdevs": 4, 00:12:26.944 "num_base_bdevs_discovered": 3, 00:12:26.944 "num_base_bdevs_operational": 3, 00:12:26.944 "base_bdevs_list": [ 00:12:26.944 { 
00:12:26.944 "name": null, 00:12:26.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.944 "is_configured": false, 00:12:26.944 "data_offset": 2048, 00:12:26.944 "data_size": 63488 00:12:26.944 }, 00:12:26.944 { 00:12:26.944 "name": "pt2", 00:12:26.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.944 "is_configured": true, 00:12:26.944 "data_offset": 2048, 00:12:26.944 "data_size": 63488 00:12:26.944 }, 00:12:26.944 { 00:12:26.944 "name": "pt3", 00:12:26.944 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.944 "is_configured": true, 00:12:26.944 "data_offset": 2048, 00:12:26.944 "data_size": 63488 00:12:26.944 }, 00:12:26.944 { 00:12:26.944 "name": "pt4", 00:12:26.944 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.944 "is_configured": true, 00:12:26.944 "data_offset": 2048, 00:12:26.944 "data_size": 63488 00:12:26.944 } 00:12:26.944 ] 00:12:26.944 }' 00:12:26.944 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.944 17:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.203 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:27.203 17:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:27.203 17:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.203 17:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.203 17:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.203 17:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:27.203 17:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:27.203 17:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:27.203 
17:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.203 17:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.203 [2024-11-26 17:57:09.039243] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.203 17:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.467 17:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1394334d-2e60-4941-a4d6-cf1eceddcf4f '!=' 1394334d-2e60-4941-a4d6-cf1eceddcf4f ']' 00:12:27.467 17:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74839 00:12:27.467 17:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74839 ']' 00:12:27.467 17:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74839 00:12:27.467 17:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:27.467 17:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.467 17:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74839 00:12:27.467 17:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.467 killing process with pid 74839 00:12:27.467 17:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.467 17:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74839' 00:12:27.467 17:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74839 00:12:27.467 17:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74839 00:12:27.467 [2024-11-26 17:57:09.122532] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:27.467 [2024-11-26 17:57:09.122664] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.467 [2024-11-26 17:57:09.122767] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.467 [2024-11-26 17:57:09.122782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:28.041 [2024-11-26 17:57:09.609107] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:29.421 ************************************ 00:12:29.421 END TEST raid_superblock_test 00:12:29.421 ************************************ 00:12:29.421 17:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:29.421 00:12:29.421 real 0m9.333s 00:12:29.421 user 0m14.505s 00:12:29.421 sys 0m1.639s 00:12:29.421 17:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.421 17:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.421 17:57:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:29.421 17:57:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:29.421 17:57:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.421 17:57:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:29.421 ************************************ 00:12:29.421 START TEST raid_read_error_test 00:12:29.421 ************************************ 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:29.421 17:57:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HxLi8p3z1b 00:12:29.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75334 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75334 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75334 ']' 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.421 17:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:29.421 [2024-11-26 17:57:11.145296] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:12:29.421 [2024-11-26 17:57:11.145439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75334 ] 00:12:29.680 [2024-11-26 17:57:11.325123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.680 [2024-11-26 17:57:11.462979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.939 [2024-11-26 17:57:11.705939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.939 [2024-11-26 17:57:11.706040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.508 BaseBdev1_malloc 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.508 true 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.508 [2024-11-26 17:57:12.143686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:30.508 [2024-11-26 17:57:12.143773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.508 [2024-11-26 17:57:12.143802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:30.508 [2024-11-26 17:57:12.143816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.508 [2024-11-26 17:57:12.146425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.508 [2024-11-26 17:57:12.146480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:30.508 BaseBdev1 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.508 BaseBdev2_malloc 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.508 true 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.508 [2024-11-26 17:57:12.213099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:30.508 [2024-11-26 17:57:12.213262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.508 [2024-11-26 17:57:12.213291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:30.508 [2024-11-26 17:57:12.213305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.508 [2024-11-26 17:57:12.215882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.508 [2024-11-26 17:57:12.215932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:30.508 BaseBdev2 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.508 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.509 BaseBdev3_malloc 00:12:30.509 17:57:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.509 true 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.509 [2024-11-26 17:57:12.295255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:30.509 [2024-11-26 17:57:12.295428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.509 [2024-11-26 17:57:12.295461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:30.509 [2024-11-26 17:57:12.295476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.509 [2024-11-26 17:57:12.298300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.509 [2024-11-26 17:57:12.298355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:30.509 BaseBdev3 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.509 BaseBdev4_malloc 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.509 true 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.509 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.509 [2024-11-26 17:57:12.364822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:30.509 [2024-11-26 17:57:12.364909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.509 [2024-11-26 17:57:12.364937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:30.509 [2024-11-26 17:57:12.364953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.509 [2024-11-26 17:57:12.367490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.509 [2024-11-26 17:57:12.367545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:30.767 BaseBdev4 00:12:30.767 17:57:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.767 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:30.767 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.767 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.767 [2024-11-26 17:57:12.376888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:30.767 [2024-11-26 17:57:12.379241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.767 [2024-11-26 17:57:12.379335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:30.767 [2024-11-26 17:57:12.379409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:30.767 [2024-11-26 17:57:12.379703] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:30.767 [2024-11-26 17:57:12.379723] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:30.767 [2024-11-26 17:57:12.380114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:30.767 [2024-11-26 17:57:12.380370] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:30.767 [2024-11-26 17:57:12.380418] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:30.767 [2024-11-26 17:57:12.380675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.767 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.768 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:30.768 17:57:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.768 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.768 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.768 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.768 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.768 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.768 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.768 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.768 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.768 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.768 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.768 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.768 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.768 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.768 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.768 "name": "raid_bdev1", 00:12:30.768 "uuid": "bbed629b-607f-4d29-88b6-3dff054ec117", 00:12:30.768 "strip_size_kb": 0, 00:12:30.768 "state": "online", 00:12:30.768 "raid_level": "raid1", 00:12:30.768 "superblock": true, 00:12:30.768 "num_base_bdevs": 4, 00:12:30.768 "num_base_bdevs_discovered": 4, 00:12:30.768 "num_base_bdevs_operational": 4, 00:12:30.768 "base_bdevs_list": [ 00:12:30.768 { 
00:12:30.768 "name": "BaseBdev1", 00:12:30.768 "uuid": "88342572-b907-5ca5-929a-0f5a032e697f", 00:12:30.768 "is_configured": true, 00:12:30.768 "data_offset": 2048, 00:12:30.768 "data_size": 63488 00:12:30.768 }, 00:12:30.768 { 00:12:30.768 "name": "BaseBdev2", 00:12:30.768 "uuid": "c03790ef-bbbc-5d80-b09e-0ae4796a53b6", 00:12:30.768 "is_configured": true, 00:12:30.768 "data_offset": 2048, 00:12:30.768 "data_size": 63488 00:12:30.768 }, 00:12:30.768 { 00:12:30.768 "name": "BaseBdev3", 00:12:30.768 "uuid": "8bac7403-c5dd-5859-982d-919c60b0c11f", 00:12:30.768 "is_configured": true, 00:12:30.768 "data_offset": 2048, 00:12:30.768 "data_size": 63488 00:12:30.768 }, 00:12:30.768 { 00:12:30.768 "name": "BaseBdev4", 00:12:30.768 "uuid": "0c99904e-4c11-5648-b871-ee3adc14608c", 00:12:30.768 "is_configured": true, 00:12:30.768 "data_offset": 2048, 00:12:30.768 "data_size": 63488 00:12:30.768 } 00:12:30.768 ] 00:12:30.768 }' 00:12:30.768 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.768 17:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.026 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:31.026 17:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:31.285 [2024-11-26 17:57:12.961648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.220 17:57:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.220 17:57:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.220 17:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.220 "name": "raid_bdev1", 00:12:32.220 "uuid": "bbed629b-607f-4d29-88b6-3dff054ec117", 00:12:32.221 "strip_size_kb": 0, 00:12:32.221 "state": "online", 00:12:32.221 "raid_level": "raid1", 00:12:32.221 "superblock": true, 00:12:32.221 "num_base_bdevs": 4, 00:12:32.221 "num_base_bdevs_discovered": 4, 00:12:32.221 "num_base_bdevs_operational": 4, 00:12:32.221 "base_bdevs_list": [ 00:12:32.221 { 00:12:32.221 "name": "BaseBdev1", 00:12:32.221 "uuid": "88342572-b907-5ca5-929a-0f5a032e697f", 00:12:32.221 "is_configured": true, 00:12:32.221 "data_offset": 2048, 00:12:32.221 "data_size": 63488 00:12:32.221 }, 00:12:32.221 { 00:12:32.221 "name": "BaseBdev2", 00:12:32.221 "uuid": "c03790ef-bbbc-5d80-b09e-0ae4796a53b6", 00:12:32.221 "is_configured": true, 00:12:32.221 "data_offset": 2048, 00:12:32.221 "data_size": 63488 00:12:32.221 }, 00:12:32.221 { 00:12:32.221 "name": "BaseBdev3", 00:12:32.221 "uuid": "8bac7403-c5dd-5859-982d-919c60b0c11f", 00:12:32.221 "is_configured": true, 00:12:32.221 "data_offset": 2048, 00:12:32.221 "data_size": 63488 00:12:32.221 }, 00:12:32.221 { 00:12:32.221 "name": "BaseBdev4", 00:12:32.221 "uuid": "0c99904e-4c11-5648-b871-ee3adc14608c", 00:12:32.221 "is_configured": true, 00:12:32.221 "data_offset": 2048, 00:12:32.221 "data_size": 63488 00:12:32.221 } 00:12:32.221 ] 00:12:32.221 }' 00:12:32.221 17:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.221 17:57:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.479 17:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:32.479 17:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.479 17:57:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.479 [2024-11-26 17:57:14.306629] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.479 [2024-11-26 17:57:14.306752] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.479 [2024-11-26 17:57:14.309909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.479 [2024-11-26 17:57:14.310037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.479 [2024-11-26 17:57:14.310216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.479 [2024-11-26 17:57:14.310277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:32.479 { 00:12:32.479 "results": [ 00:12:32.479 { 00:12:32.479 "job": "raid_bdev1", 00:12:32.479 "core_mask": "0x1", 00:12:32.479 "workload": "randrw", 00:12:32.479 "percentage": 50, 00:12:32.479 "status": "finished", 00:12:32.479 "queue_depth": 1, 00:12:32.479 "io_size": 131072, 00:12:32.479 "runtime": 1.345565, 00:12:32.479 "iops": 9048.243674590229, 00:12:32.479 "mibps": 1131.0304593237786, 00:12:32.479 "io_failed": 0, 00:12:32.479 "io_timeout": 0, 00:12:32.479 "avg_latency_us": 107.05980922320957, 00:12:32.479 "min_latency_us": 31.524890829694325, 00:12:32.479 "max_latency_us": 1903.1196506550218 00:12:32.479 } 00:12:32.479 ], 00:12:32.479 "core_count": 1 00:12:32.479 } 00:12:32.479 17:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.479 17:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75334 00:12:32.479 17:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75334 ']' 00:12:32.479 17:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75334 00:12:32.479 17:57:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:32.479 17:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.479 17:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75334 00:12:32.737 17:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:32.737 killing process with pid 75334 00:12:32.737 17:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:32.737 17:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75334' 00:12:32.737 17:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75334 00:12:32.737 [2024-11-26 17:57:14.357900] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:32.737 17:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75334 00:12:32.996 [2024-11-26 17:57:14.732425] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:34.384 17:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HxLi8p3z1b 00:12:34.384 17:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:34.384 17:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:34.384 17:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:34.384 17:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:34.384 17:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:34.384 17:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:34.384 17:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:34.384 00:12:34.384 real 0m5.172s 00:12:34.384 user 0m6.060s 00:12:34.384 sys 0m0.652s 
00:12:34.384 17:57:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.384 ************************************ 00:12:34.384 END TEST raid_read_error_test 00:12:34.384 ************************************ 00:12:34.384 17:57:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.643 17:57:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:34.643 17:57:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:34.643 17:57:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.643 17:57:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:34.643 ************************************ 00:12:34.643 START TEST raid_write_error_test 00:12:34.643 ************************************ 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.balpW408O1 00:12:34.643 17:57:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75485 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75485 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75485 ']' 00:12:34.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:34.643 17:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.643 [2024-11-26 17:57:16.399476] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:12:34.643 [2024-11-26 17:57:16.399620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75485 ] 00:12:34.902 [2024-11-26 17:57:16.563815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.902 [2024-11-26 17:57:16.700168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.161 [2024-11-26 17:57:16.929256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.161 [2024-11-26 17:57:16.929406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.730 BaseBdev1_malloc 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.730 true 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.730 [2024-11-26 17:57:17.367552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:35.730 [2024-11-26 17:57:17.367632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.730 [2024-11-26 17:57:17.367658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:35.730 [2024-11-26 17:57:17.367670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.730 [2024-11-26 17:57:17.370173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.730 [2024-11-26 17:57:17.370233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:35.730 BaseBdev1 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.730 BaseBdev2_malloc 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:35.730 17:57:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.730 true 00:12:35.730 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.731 [2024-11-26 17:57:17.439521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:35.731 [2024-11-26 17:57:17.439600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.731 [2024-11-26 17:57:17.439624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:35.731 [2024-11-26 17:57:17.439636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.731 [2024-11-26 17:57:17.442170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.731 [2024-11-26 17:57:17.442218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:35.731 BaseBdev2 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:35.731 BaseBdev3_malloc 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.731 true 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.731 [2024-11-26 17:57:17.528733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:35.731 [2024-11-26 17:57:17.528807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.731 [2024-11-26 17:57:17.528831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:35.731 [2024-11-26 17:57:17.528843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.731 [2024-11-26 17:57:17.531386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.731 [2024-11-26 17:57:17.531435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:35.731 BaseBdev3 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.731 BaseBdev4_malloc 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.731 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.990 true 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.990 [2024-11-26 17:57:17.601534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:35.990 [2024-11-26 17:57:17.601628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.990 [2024-11-26 17:57:17.601656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:35.990 [2024-11-26 17:57:17.601669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.990 [2024-11-26 17:57:17.604207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.990 [2024-11-26 17:57:17.604263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:35.990 BaseBdev4 
00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.990 [2024-11-26 17:57:17.613593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.990 [2024-11-26 17:57:17.615839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:35.990 [2024-11-26 17:57:17.615930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:35.990 [2024-11-26 17:57:17.615995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:35.990 [2024-11-26 17:57:17.616291] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:35.990 [2024-11-26 17:57:17.616313] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:35.990 [2024-11-26 17:57:17.616629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:35.990 [2024-11-26 17:57:17.616837] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:35.990 [2024-11-26 17:57:17.616848] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:35.990 [2024-11-26 17:57:17.617119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.990 "name": "raid_bdev1", 00:12:35.990 "uuid": "f0ed539b-9bc8-44eb-b72c-c0234f836ff0", 00:12:35.990 "strip_size_kb": 0, 00:12:35.990 "state": "online", 00:12:35.990 "raid_level": "raid1", 00:12:35.990 "superblock": true, 00:12:35.990 "num_base_bdevs": 4, 00:12:35.990 "num_base_bdevs_discovered": 4, 00:12:35.990 
"num_base_bdevs_operational": 4, 00:12:35.990 "base_bdevs_list": [ 00:12:35.990 { 00:12:35.990 "name": "BaseBdev1", 00:12:35.990 "uuid": "0369ad45-ee9d-5965-976a-610deadfdc75", 00:12:35.990 "is_configured": true, 00:12:35.990 "data_offset": 2048, 00:12:35.990 "data_size": 63488 00:12:35.990 }, 00:12:35.990 { 00:12:35.990 "name": "BaseBdev2", 00:12:35.990 "uuid": "c34e13c4-5692-5945-8bc2-10ec7dadd3ea", 00:12:35.990 "is_configured": true, 00:12:35.990 "data_offset": 2048, 00:12:35.990 "data_size": 63488 00:12:35.990 }, 00:12:35.990 { 00:12:35.990 "name": "BaseBdev3", 00:12:35.990 "uuid": "ed4820b3-97df-5e26-9789-9424299397bc", 00:12:35.990 "is_configured": true, 00:12:35.990 "data_offset": 2048, 00:12:35.990 "data_size": 63488 00:12:35.990 }, 00:12:35.990 { 00:12:35.990 "name": "BaseBdev4", 00:12:35.990 "uuid": "96b28343-84af-58e0-8db2-40dd7d0780b0", 00:12:35.990 "is_configured": true, 00:12:35.990 "data_offset": 2048, 00:12:35.990 "data_size": 63488 00:12:35.990 } 00:12:35.990 ] 00:12:35.990 }' 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.990 17:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.249 17:57:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:36.249 17:57:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:36.507 [2024-11-26 17:57:18.194220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.444 [2024-11-26 17:57:19.096642] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:37.444 [2024-11-26 17:57:19.096718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:37.444 [2024-11-26 17:57:19.096982] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.444 "name": "raid_bdev1", 00:12:37.444 "uuid": "f0ed539b-9bc8-44eb-b72c-c0234f836ff0", 00:12:37.444 "strip_size_kb": 0, 00:12:37.444 "state": "online", 00:12:37.444 "raid_level": "raid1", 00:12:37.444 "superblock": true, 00:12:37.444 "num_base_bdevs": 4, 00:12:37.444 "num_base_bdevs_discovered": 3, 00:12:37.444 "num_base_bdevs_operational": 3, 00:12:37.444 "base_bdevs_list": [ 00:12:37.444 { 00:12:37.444 "name": null, 00:12:37.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.444 "is_configured": false, 00:12:37.444 "data_offset": 0, 00:12:37.444 "data_size": 63488 00:12:37.444 }, 00:12:37.444 { 00:12:37.444 "name": "BaseBdev2", 00:12:37.444 "uuid": "c34e13c4-5692-5945-8bc2-10ec7dadd3ea", 00:12:37.444 "is_configured": true, 00:12:37.444 "data_offset": 2048, 00:12:37.444 "data_size": 63488 00:12:37.444 }, 00:12:37.444 { 00:12:37.444 "name": "BaseBdev3", 00:12:37.444 "uuid": "ed4820b3-97df-5e26-9789-9424299397bc", 00:12:37.444 "is_configured": true, 00:12:37.444 "data_offset": 2048, 00:12:37.444 "data_size": 63488 00:12:37.444 }, 00:12:37.444 { 00:12:37.444 "name": "BaseBdev4", 00:12:37.444 "uuid": "96b28343-84af-58e0-8db2-40dd7d0780b0", 00:12:37.444 "is_configured": true, 00:12:37.444 "data_offset": 2048, 00:12:37.444 "data_size": 63488 00:12:37.444 } 00:12:37.444 ] 
00:12:37.444 }' 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.444 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.703 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:37.703 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.703 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.963 [2024-11-26 17:57:19.566489] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:37.963 [2024-11-26 17:57:19.566626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:37.963 [2024-11-26 17:57:19.570164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.963 [2024-11-26 17:57:19.570241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.963 [2024-11-26 17:57:19.570365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.963 [2024-11-26 17:57:19.570380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:37.963 { 00:12:37.963 "results": [ 00:12:37.963 { 00:12:37.963 "job": "raid_bdev1", 00:12:37.963 "core_mask": "0x1", 00:12:37.963 "workload": "randrw", 00:12:37.963 "percentage": 50, 00:12:37.963 "status": "finished", 00:12:37.963 "queue_depth": 1, 00:12:37.963 "io_size": 131072, 00:12:37.963 "runtime": 1.373004, 00:12:37.963 "iops": 9935.149497015303, 00:12:37.963 "mibps": 1241.893687126913, 00:12:37.963 "io_failed": 0, 00:12:37.963 "io_timeout": 0, 00:12:37.963 "avg_latency_us": 97.37474688591323, 00:12:37.963 "min_latency_us": 26.494323144104804, 00:12:37.963 "max_latency_us": 1810.1100436681222 00:12:37.963 } 00:12:37.963 ], 00:12:37.963 "core_count": 1 
00:12:37.963 } 00:12:37.963 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.963 17:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75485 00:12:37.963 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75485 ']' 00:12:37.963 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75485 00:12:37.963 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:37.963 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.963 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75485 00:12:37.963 killing process with pid 75485 00:12:37.963 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.963 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.963 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75485' 00:12:37.963 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75485 00:12:37.963 [2024-11-26 17:57:19.618642] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:37.963 17:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75485 00:12:38.289 [2024-11-26 17:57:20.008842] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:39.694 17:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:39.694 17:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.balpW408O1 00:12:39.694 17:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:39.694 17:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:39.694 17:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:39.694 17:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:39.694 17:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:39.694 17:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:39.694 00:12:39.694 real 0m5.161s 00:12:39.694 user 0m6.109s 00:12:39.694 sys 0m0.609s 00:12:39.694 17:57:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.694 ************************************ 00:12:39.694 END TEST raid_write_error_test 00:12:39.694 ************************************ 00:12:39.694 17:57:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.694 17:57:21 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:39.694 17:57:21 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:39.694 17:57:21 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:39.694 17:57:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:39.694 17:57:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.694 17:57:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:39.694 ************************************ 00:12:39.694 START TEST raid_rebuild_test 00:12:39.694 ************************************ 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:39.694 
17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75634 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75634 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75634 ']' 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.694 17:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.953 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:39.953 Zero copy mechanism will not be used. 00:12:39.953 [2024-11-26 17:57:21.618099] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:12:39.954 [2024-11-26 17:57:21.618240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75634 ] 00:12:39.954 [2024-11-26 17:57:21.798240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.212 [2024-11-26 17:57:21.933851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.472 [2024-11-26 17:57:22.175610] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.472 [2024-11-26 17:57:22.175691] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.731 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.731 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:40.731 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:40.731 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:40.731 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.731 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.731 BaseBdev1_malloc 00:12:40.731 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.731 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:40.731 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.731 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.731 [2024-11-26 17:57:22.587503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:40.732 
[2024-11-26 17:57:22.587649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.732 [2024-11-26 17:57:22.587701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:40.732 [2024-11-26 17:57:22.587721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.732 [2024-11-26 17:57:22.590361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.732 [2024-11-26 17:57:22.590411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:40.732 BaseBdev1 00:12:40.732 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.732 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:40.732 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:40.732 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.732 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.991 BaseBdev2_malloc 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.991 [2024-11-26 17:57:22.649044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:40.991 [2024-11-26 17:57:22.649127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.991 [2024-11-26 17:57:22.649157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:40.991 [2024-11-26 17:57:22.649169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.991 [2024-11-26 17:57:22.651681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.991 [2024-11-26 17:57:22.651730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:40.991 BaseBdev2 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.991 spare_malloc 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.991 spare_delay 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.991 [2024-11-26 17:57:22.737136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:40.991 [2024-11-26 17:57:22.737242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:40.991 [2024-11-26 17:57:22.737273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:40.991 [2024-11-26 17:57:22.737287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.991 [2024-11-26 17:57:22.739884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.991 [2024-11-26 17:57:22.740030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:40.991 spare 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.991 [2024-11-26 17:57:22.749173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.991 [2024-11-26 17:57:22.751356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:40.991 [2024-11-26 17:57:22.751478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:40.991 [2024-11-26 17:57:22.751495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:40.991 [2024-11-26 17:57:22.751830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:40.991 [2024-11-26 17:57:22.752054] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:40.991 [2024-11-26 17:57:22.752069] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:40.991 [2024-11-26 17:57:22.752280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.991 "name": "raid_bdev1", 00:12:40.991 "uuid": "0d24dec2-51ca-4656-91e2-8d88eec700d4", 00:12:40.991 "strip_size_kb": 0, 00:12:40.991 "state": "online", 00:12:40.991 
"raid_level": "raid1", 00:12:40.991 "superblock": false, 00:12:40.991 "num_base_bdevs": 2, 00:12:40.991 "num_base_bdevs_discovered": 2, 00:12:40.991 "num_base_bdevs_operational": 2, 00:12:40.991 "base_bdevs_list": [ 00:12:40.991 { 00:12:40.991 "name": "BaseBdev1", 00:12:40.991 "uuid": "5472defb-0464-5384-a2df-fbeaf7338815", 00:12:40.991 "is_configured": true, 00:12:40.991 "data_offset": 0, 00:12:40.991 "data_size": 65536 00:12:40.991 }, 00:12:40.991 { 00:12:40.991 "name": "BaseBdev2", 00:12:40.991 "uuid": "019a234b-7c1e-5f27-b3a6-8b5189e79003", 00:12:40.991 "is_configured": true, 00:12:40.991 "data_offset": 0, 00:12:40.991 "data_size": 65536 00:12:40.991 } 00:12:40.991 ] 00:12:40.991 }' 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.991 17:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.560 [2024-11-26 17:57:23.236700] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.560 17:57:23 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.560 17:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:41.821 [2024-11-26 17:57:23.552098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:41.821 /dev/nbd0 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.821 1+0 records in 00:12:41.821 1+0 records out 00:12:41.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339107 s, 12.1 MB/s 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:41.821 17:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:47.122 65536+0 records in 00:12:47.122 65536+0 records out 00:12:47.122 33554432 bytes (34 MB, 32 MiB) copied, 4.86464 s, 6.9 MB/s 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:47.122 [2024-11-26 17:57:28.755195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.122 [2024-11-26 17:57:28.771282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.122 17:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.122 "name": "raid_bdev1", 00:12:47.122 "uuid": "0d24dec2-51ca-4656-91e2-8d88eec700d4", 00:12:47.122 "strip_size_kb": 0, 00:12:47.122 "state": "online", 00:12:47.122 "raid_level": "raid1", 00:12:47.122 "superblock": false, 00:12:47.122 "num_base_bdevs": 2, 00:12:47.122 "num_base_bdevs_discovered": 1, 00:12:47.122 "num_base_bdevs_operational": 1, 00:12:47.122 "base_bdevs_list": [ 00:12:47.122 { 00:12:47.122 "name": null, 00:12:47.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.122 "is_configured": false, 00:12:47.122 "data_offset": 0, 00:12:47.122 "data_size": 65536 00:12:47.122 }, 00:12:47.122 { 00:12:47.122 "name": "BaseBdev2", 00:12:47.122 "uuid": "019a234b-7c1e-5f27-b3a6-8b5189e79003", 00:12:47.122 "is_configured": true, 00:12:47.122 "data_offset": 0, 00:12:47.123 "data_size": 65536 00:12:47.123 } 00:12:47.123 ] 00:12:47.123 }' 00:12:47.123 17:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.123 17:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.691 17:57:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:47.691 17:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.691 17:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.691 [2024-11-26 17:57:29.262501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:47.691 [2024-11-26 17:57:29.279905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:12:47.691 17:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.691 17:57:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:47.692 [2024-11-26 17:57:29.281806] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:48.632 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.632 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.632 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.632 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.632 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.632 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.632 17:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.632 17:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.632 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.632 17:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.632 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.632 "name": "raid_bdev1", 00:12:48.632 "uuid": "0d24dec2-51ca-4656-91e2-8d88eec700d4", 00:12:48.632 "strip_size_kb": 0, 00:12:48.632 "state": "online", 00:12:48.632 "raid_level": "raid1", 00:12:48.632 "superblock": false, 00:12:48.632 "num_base_bdevs": 2, 00:12:48.632 "num_base_bdevs_discovered": 2, 00:12:48.632 "num_base_bdevs_operational": 2, 00:12:48.632 "process": { 00:12:48.632 "type": "rebuild", 00:12:48.632 "target": "spare", 00:12:48.632 "progress": { 00:12:48.632 
"blocks": 20480, 00:12:48.632 "percent": 31 00:12:48.632 } 00:12:48.632 }, 00:12:48.632 "base_bdevs_list": [ 00:12:48.632 { 00:12:48.632 "name": "spare", 00:12:48.632 "uuid": "828c2594-de42-5b02-b5e6-afb2ee592ee2", 00:12:48.632 "is_configured": true, 00:12:48.632 "data_offset": 0, 00:12:48.632 "data_size": 65536 00:12:48.632 }, 00:12:48.632 { 00:12:48.632 "name": "BaseBdev2", 00:12:48.632 "uuid": "019a234b-7c1e-5f27-b3a6-8b5189e79003", 00:12:48.632 "is_configured": true, 00:12:48.632 "data_offset": 0, 00:12:48.632 "data_size": 65536 00:12:48.632 } 00:12:48.632 ] 00:12:48.632 }' 00:12:48.632 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.632 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.632 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.632 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.632 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:48.632 17:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.632 17:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.632 [2024-11-26 17:57:30.441100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:48.632 [2024-11-26 17:57:30.488351] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:48.632 [2024-11-26 17:57:30.488505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.632 [2024-11-26 17:57:30.488526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:48.632 [2024-11-26 17:57:30.488537] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:48.891 17:57:30 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.891 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:48.891 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.891 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.891 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.891 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.891 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:48.891 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.891 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.891 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.891 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.891 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.891 17:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.891 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.891 17:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.891 17:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.891 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.891 "name": "raid_bdev1", 00:12:48.891 "uuid": "0d24dec2-51ca-4656-91e2-8d88eec700d4", 00:12:48.891 "strip_size_kb": 0, 00:12:48.891 "state": "online", 00:12:48.891 "raid_level": "raid1", 00:12:48.891 
"superblock": false, 00:12:48.891 "num_base_bdevs": 2, 00:12:48.891 "num_base_bdevs_discovered": 1, 00:12:48.891 "num_base_bdevs_operational": 1, 00:12:48.891 "base_bdevs_list": [ 00:12:48.891 { 00:12:48.891 "name": null, 00:12:48.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.891 "is_configured": false, 00:12:48.891 "data_offset": 0, 00:12:48.891 "data_size": 65536 00:12:48.891 }, 00:12:48.891 { 00:12:48.891 "name": "BaseBdev2", 00:12:48.891 "uuid": "019a234b-7c1e-5f27-b3a6-8b5189e79003", 00:12:48.891 "is_configured": true, 00:12:48.891 "data_offset": 0, 00:12:48.891 "data_size": 65536 00:12:48.891 } 00:12:48.891 ] 00:12:48.891 }' 00:12:48.891 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.891 17:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.150 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:49.150 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.150 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:49.150 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:49.150 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.150 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.150 17:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.150 17:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.150 17:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.150 17:57:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.408 17:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:49.408 "name": "raid_bdev1", 00:12:49.408 "uuid": "0d24dec2-51ca-4656-91e2-8d88eec700d4", 00:12:49.408 "strip_size_kb": 0, 00:12:49.408 "state": "online", 00:12:49.408 "raid_level": "raid1", 00:12:49.408 "superblock": false, 00:12:49.408 "num_base_bdevs": 2, 00:12:49.408 "num_base_bdevs_discovered": 1, 00:12:49.408 "num_base_bdevs_operational": 1, 00:12:49.408 "base_bdevs_list": [ 00:12:49.408 { 00:12:49.408 "name": null, 00:12:49.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.408 "is_configured": false, 00:12:49.408 "data_offset": 0, 00:12:49.408 "data_size": 65536 00:12:49.408 }, 00:12:49.408 { 00:12:49.408 "name": "BaseBdev2", 00:12:49.408 "uuid": "019a234b-7c1e-5f27-b3a6-8b5189e79003", 00:12:49.408 "is_configured": true, 00:12:49.408 "data_offset": 0, 00:12:49.408 "data_size": 65536 00:12:49.408 } 00:12:49.408 ] 00:12:49.408 }' 00:12:49.408 17:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.408 17:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:49.408 17:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.408 17:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:49.408 17:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:49.408 17:57:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.408 17:57:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.408 [2024-11-26 17:57:31.152006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:49.408 [2024-11-26 17:57:31.171922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:49.408 17:57:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.408 
17:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:49.408 [2024-11-26 17:57:31.174257] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:50.343 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.343 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.343 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.343 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.343 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.343 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.343 17:57:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.343 17:57:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.343 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.343 17:57:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.602 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.602 "name": "raid_bdev1", 00:12:50.602 "uuid": "0d24dec2-51ca-4656-91e2-8d88eec700d4", 00:12:50.602 "strip_size_kb": 0, 00:12:50.602 "state": "online", 00:12:50.602 "raid_level": "raid1", 00:12:50.602 "superblock": false, 00:12:50.602 "num_base_bdevs": 2, 00:12:50.602 "num_base_bdevs_discovered": 2, 00:12:50.602 "num_base_bdevs_operational": 2, 00:12:50.602 "process": { 00:12:50.602 "type": "rebuild", 00:12:50.602 "target": "spare", 00:12:50.602 "progress": { 00:12:50.602 "blocks": 20480, 00:12:50.602 "percent": 31 00:12:50.602 } 00:12:50.602 }, 00:12:50.602 "base_bdevs_list": [ 
00:12:50.602 { 00:12:50.602 "name": "spare", 00:12:50.602 "uuid": "828c2594-de42-5b02-b5e6-afb2ee592ee2", 00:12:50.602 "is_configured": true, 00:12:50.602 "data_offset": 0, 00:12:50.602 "data_size": 65536 00:12:50.602 }, 00:12:50.602 { 00:12:50.602 "name": "BaseBdev2", 00:12:50.602 "uuid": "019a234b-7c1e-5f27-b3a6-8b5189e79003", 00:12:50.602 "is_configured": true, 00:12:50.602 "data_offset": 0, 00:12:50.602 "data_size": 65536 00:12:50.602 } 00:12:50.602 ] 00:12:50.602 }' 00:12:50.602 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.602 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.602 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.602 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.602 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:50.602 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:50.602 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:50.602 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:50.602 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=393 00:12:50.602 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:50.602 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.603 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.603 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.603 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.603 
17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.603 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.603 17:57:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.603 17:57:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.603 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.603 17:57:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.603 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.603 "name": "raid_bdev1", 00:12:50.603 "uuid": "0d24dec2-51ca-4656-91e2-8d88eec700d4", 00:12:50.603 "strip_size_kb": 0, 00:12:50.603 "state": "online", 00:12:50.603 "raid_level": "raid1", 00:12:50.603 "superblock": false, 00:12:50.603 "num_base_bdevs": 2, 00:12:50.603 "num_base_bdevs_discovered": 2, 00:12:50.603 "num_base_bdevs_operational": 2, 00:12:50.603 "process": { 00:12:50.603 "type": "rebuild", 00:12:50.603 "target": "spare", 00:12:50.603 "progress": { 00:12:50.603 "blocks": 22528, 00:12:50.603 "percent": 34 00:12:50.603 } 00:12:50.603 }, 00:12:50.603 "base_bdevs_list": [ 00:12:50.603 { 00:12:50.603 "name": "spare", 00:12:50.603 "uuid": "828c2594-de42-5b02-b5e6-afb2ee592ee2", 00:12:50.603 "is_configured": true, 00:12:50.603 "data_offset": 0, 00:12:50.603 "data_size": 65536 00:12:50.603 }, 00:12:50.603 { 00:12:50.603 "name": "BaseBdev2", 00:12:50.603 "uuid": "019a234b-7c1e-5f27-b3a6-8b5189e79003", 00:12:50.603 "is_configured": true, 00:12:50.603 "data_offset": 0, 00:12:50.603 "data_size": 65536 00:12:50.603 } 00:12:50.603 ] 00:12:50.603 }' 00:12:50.603 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.603 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:50.603 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.862 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.863 17:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:51.799 17:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:51.799 17:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.799 17:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.799 17:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.799 17:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.799 17:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.799 17:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.799 17:57:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.799 17:57:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.799 17:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.799 17:57:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.799 17:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.799 "name": "raid_bdev1", 00:12:51.799 "uuid": "0d24dec2-51ca-4656-91e2-8d88eec700d4", 00:12:51.799 "strip_size_kb": 0, 00:12:51.799 "state": "online", 00:12:51.799 "raid_level": "raid1", 00:12:51.799 "superblock": false, 00:12:51.799 "num_base_bdevs": 2, 00:12:51.799 "num_base_bdevs_discovered": 2, 00:12:51.799 "num_base_bdevs_operational": 2, 00:12:51.799 "process": { 
00:12:51.799 "type": "rebuild", 00:12:51.799 "target": "spare", 00:12:51.799 "progress": { 00:12:51.799 "blocks": 47104, 00:12:51.799 "percent": 71 00:12:51.799 } 00:12:51.799 }, 00:12:51.799 "base_bdevs_list": [ 00:12:51.799 { 00:12:51.799 "name": "spare", 00:12:51.799 "uuid": "828c2594-de42-5b02-b5e6-afb2ee592ee2", 00:12:51.799 "is_configured": true, 00:12:51.799 "data_offset": 0, 00:12:51.799 "data_size": 65536 00:12:51.799 }, 00:12:51.799 { 00:12:51.799 "name": "BaseBdev2", 00:12:51.799 "uuid": "019a234b-7c1e-5f27-b3a6-8b5189e79003", 00:12:51.799 "is_configured": true, 00:12:51.799 "data_offset": 0, 00:12:51.799 "data_size": 65536 00:12:51.799 } 00:12:51.799 ] 00:12:51.799 }' 00:12:51.799 17:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.799 17:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.799 17:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.799 17:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:51.799 17:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:52.738 [2024-11-26 17:57:34.392173] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:52.738 [2024-11-26 17:57:34.392414] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:52.738 [2024-11-26 17:57:34.392516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.998 "name": "raid_bdev1", 00:12:52.998 "uuid": "0d24dec2-51ca-4656-91e2-8d88eec700d4", 00:12:52.998 "strip_size_kb": 0, 00:12:52.998 "state": "online", 00:12:52.998 "raid_level": "raid1", 00:12:52.998 "superblock": false, 00:12:52.998 "num_base_bdevs": 2, 00:12:52.998 "num_base_bdevs_discovered": 2, 00:12:52.998 "num_base_bdevs_operational": 2, 00:12:52.998 "base_bdevs_list": [ 00:12:52.998 { 00:12:52.998 "name": "spare", 00:12:52.998 "uuid": "828c2594-de42-5b02-b5e6-afb2ee592ee2", 00:12:52.998 "is_configured": true, 00:12:52.998 "data_offset": 0, 00:12:52.998 "data_size": 65536 00:12:52.998 }, 00:12:52.998 { 00:12:52.998 "name": "BaseBdev2", 00:12:52.998 "uuid": "019a234b-7c1e-5f27-b3a6-8b5189e79003", 00:12:52.998 "is_configured": true, 00:12:52.998 "data_offset": 0, 00:12:52.998 "data_size": 65536 00:12:52.998 } 00:12:52.998 ] 00:12:52.998 }' 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:52.998 17:57:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.998 "name": "raid_bdev1", 00:12:52.998 "uuid": "0d24dec2-51ca-4656-91e2-8d88eec700d4", 00:12:52.998 "strip_size_kb": 0, 00:12:52.998 "state": "online", 00:12:52.998 "raid_level": "raid1", 00:12:52.998 "superblock": false, 00:12:52.998 "num_base_bdevs": 2, 00:12:52.998 "num_base_bdevs_discovered": 2, 00:12:52.998 "num_base_bdevs_operational": 2, 00:12:52.998 "base_bdevs_list": [ 00:12:52.998 { 00:12:52.998 "name": "spare", 00:12:52.998 "uuid": "828c2594-de42-5b02-b5e6-afb2ee592ee2", 00:12:52.998 "is_configured": true, 
00:12:52.998 "data_offset": 0, 00:12:52.998 "data_size": 65536 00:12:52.998 }, 00:12:52.998 { 00:12:52.998 "name": "BaseBdev2", 00:12:52.998 "uuid": "019a234b-7c1e-5f27-b3a6-8b5189e79003", 00:12:52.998 "is_configured": true, 00:12:52.998 "data_offset": 0, 00:12:52.998 "data_size": 65536 00:12:52.998 } 00:12:52.998 ] 00:12:52.998 }' 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:52.998 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.264 "name": "raid_bdev1", 00:12:53.264 "uuid": "0d24dec2-51ca-4656-91e2-8d88eec700d4", 00:12:53.264 "strip_size_kb": 0, 00:12:53.264 "state": "online", 00:12:53.264 "raid_level": "raid1", 00:12:53.264 "superblock": false, 00:12:53.264 "num_base_bdevs": 2, 00:12:53.264 "num_base_bdevs_discovered": 2, 00:12:53.264 "num_base_bdevs_operational": 2, 00:12:53.264 "base_bdevs_list": [ 00:12:53.264 { 00:12:53.264 "name": "spare", 00:12:53.264 "uuid": "828c2594-de42-5b02-b5e6-afb2ee592ee2", 00:12:53.264 "is_configured": true, 00:12:53.264 "data_offset": 0, 00:12:53.264 "data_size": 65536 00:12:53.264 }, 00:12:53.264 { 00:12:53.264 "name": "BaseBdev2", 00:12:53.264 "uuid": "019a234b-7c1e-5f27-b3a6-8b5189e79003", 00:12:53.264 "is_configured": true, 00:12:53.264 "data_offset": 0, 00:12:53.264 "data_size": 65536 00:12:53.264 } 00:12:53.264 ] 00:12:53.264 }' 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.264 17:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.532 17:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:53.532 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.532 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.532 [2024-11-26 17:57:35.338746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:53.532 [2024-11-26 17:57:35.338791] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:53.532 [2024-11-26 17:57:35.338908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:53.532 [2024-11-26 17:57:35.338988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:53.532 [2024-11-26 17:57:35.339000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:53.532 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.532 17:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:53.532 17:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.532 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.532 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.532 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.791 17:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:53.791 17:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:53.791 17:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:53.791 17:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:53.791 17:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:53.792 17:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:53.792 17:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:53.792 17:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:12:53.792 17:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:53.792 17:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:53.792 17:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:53.792 17:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:53.792 17:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:54.052 /dev/nbd0 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.052 1+0 records in 00:12:54.052 1+0 records out 00:12:54.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304259 s, 13.5 MB/s 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:54.052 17:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:54.313 /dev/nbd1 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.313 1+0 records in 00:12:54.313 1+0 records out 00:12:54.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334533 s, 12.2 MB/s 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:54.313 17:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:54.574 17:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:54.574 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:54.574 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:54.574 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:54.574 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:54.574 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.574 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:54.574 17:57:36 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:54.574 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:54.574 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:54.574 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.574 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.574 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:54.834 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:54.834 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.834 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.834 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:54.834 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:54.834 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:54.834 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:54.834 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.834 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.834 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:54.834 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:54.834 17:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.834 17:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:54.834 17:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75634 00:12:54.834 17:57:36 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75634 ']' 00:12:54.834 17:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75634 00:12:54.834 17:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:54.834 17:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.834 17:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75634 00:12:55.092 killing process with pid 75634 00:12:55.093 Received shutdown signal, test time was about 60.000000 seconds 00:12:55.093 00:12:55.093 Latency(us) 00:12:55.093 [2024-11-26T17:57:36.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.093 [2024-11-26T17:57:36.956Z] =================================================================================================================== 00:12:55.093 [2024-11-26T17:57:36.956Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:55.093 17:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.093 17:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.093 17:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75634' 00:12:55.093 17:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75634 00:12:55.093 17:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75634 00:12:55.093 [2024-11-26 17:57:36.720731] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:55.352 [2024-11-26 17:57:37.086116] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:56.731 17:57:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:56.731 00:12:56.731 real 0m16.904s 00:12:56.731 user 0m19.179s 00:12:56.731 sys 0m3.348s 00:12:56.731 17:57:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.731 17:57:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.731 ************************************ 00:12:56.731 END TEST raid_rebuild_test 00:12:56.731 ************************************ 00:12:56.731 17:57:38 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:56.731 17:57:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:56.731 17:57:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.731 17:57:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:56.731 ************************************ 00:12:56.731 START TEST raid_rebuild_test_sb 00:12:56.731 ************************************ 00:12:56.731 17:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:56.731 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:56.731 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:56.731 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:56.731 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:56.731 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:56.731 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:56.731 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:56.731 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76069 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76069 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76069 ']' 00:12:56.732 17:57:38 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.732 17:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.732 [2024-11-26 17:57:38.578062] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:12:56.732 [2024-11-26 17:57:38.578291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76069 ] 00:12:56.732 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:56.732 Zero copy mechanism will not be used. 
00:12:56.990 [2024-11-26 17:57:38.758437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.250 [2024-11-26 17:57:38.891648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.564 [2024-11-26 17:57:39.115144] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:57.564 [2024-11-26 17:57:39.115220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.841 BaseBdev1_malloc 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.841 [2024-11-26 17:57:39.602929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:57.841 [2024-11-26 17:57:39.603050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.841 [2024-11-26 17:57:39.603085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:57.841 [2024-11-26 
17:57:39.603119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.841 [2024-11-26 17:57:39.605746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.841 [2024-11-26 17:57:39.605808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:57.841 BaseBdev1 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.841 BaseBdev2_malloc 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.841 [2024-11-26 17:57:39.663078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:57.841 [2024-11-26 17:57:39.663237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.841 [2024-11-26 17:57:39.663274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:57.841 [2024-11-26 17:57:39.663288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.841 [2024-11-26 17:57:39.665827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:12:57.841 [2024-11-26 17:57:39.665880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:57.841 BaseBdev2 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.841 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.103 spare_malloc 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.103 spare_delay 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.103 [2024-11-26 17:57:39.743900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:58.103 [2024-11-26 17:57:39.744107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.103 [2024-11-26 17:57:39.744145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:58.103 [2024-11-26 17:57:39.744158] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.103 [2024-11-26 17:57:39.746798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.103 [2024-11-26 17:57:39.746857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:58.103 spare 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.103 [2024-11-26 17:57:39.755974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.103 [2024-11-26 17:57:39.758200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.103 [2024-11-26 17:57:39.758443] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:58.103 [2024-11-26 17:57:39.758471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:58.103 [2024-11-26 17:57:39.758808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:58.103 [2024-11-26 17:57:39.759031] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:58.103 [2024-11-26 17:57:39.759050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:58.103 [2024-11-26 17:57:39.759270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.103 "name": "raid_bdev1", 00:12:58.103 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:12:58.103 "strip_size_kb": 0, 00:12:58.103 "state": "online", 00:12:58.103 "raid_level": "raid1", 00:12:58.103 "superblock": true, 00:12:58.103 "num_base_bdevs": 2, 00:12:58.103 
"num_base_bdevs_discovered": 2, 00:12:58.103 "num_base_bdevs_operational": 2, 00:12:58.103 "base_bdevs_list": [ 00:12:58.103 { 00:12:58.103 "name": "BaseBdev1", 00:12:58.103 "uuid": "3729584b-dcd1-50cd-805d-8982dd272357", 00:12:58.103 "is_configured": true, 00:12:58.103 "data_offset": 2048, 00:12:58.103 "data_size": 63488 00:12:58.103 }, 00:12:58.103 { 00:12:58.103 "name": "BaseBdev2", 00:12:58.103 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:12:58.103 "is_configured": true, 00:12:58.103 "data_offset": 2048, 00:12:58.103 "data_size": 63488 00:12:58.103 } 00:12:58.103 ] 00:12:58.103 }' 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.103 17:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.362 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:58.362 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:58.362 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.362 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.362 [2024-11-26 17:57:40.187547] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.362 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.362 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:58.362 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:58.362 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.362 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.362 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:58.622 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.622 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:58.622 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:58.622 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:58.622 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:58.622 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:58.622 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.623 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:58.623 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:58.623 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:58.623 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:58.623 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:58.623 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:58.623 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.623 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:58.882 [2024-11-26 17:57:40.498786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:58.882 /dev/nbd0 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.882 1+0 records in 00:12:58.882 1+0 records out 00:12:58.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603014 s, 6.8 MB/s 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.882 17:57:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:58.882 17:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:04.150 63488+0 records in 00:13:04.150 63488+0 records out 00:13:04.150 32505856 bytes (33 MB, 31 MiB) copied, 4.91479 s, 6.6 MB/s 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:04.150 [2024-11-26 17:57:45.711862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.150 [2024-11-26 17:57:45.752392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.150 17:57:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.150 17:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.151 17:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.151 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.151 "name": "raid_bdev1", 00:13:04.151 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:04.151 "strip_size_kb": 0, 00:13:04.151 "state": "online", 00:13:04.151 "raid_level": "raid1", 00:13:04.151 "superblock": true, 00:13:04.151 "num_base_bdevs": 2, 00:13:04.151 "num_base_bdevs_discovered": 1, 00:13:04.151 "num_base_bdevs_operational": 1, 00:13:04.151 "base_bdevs_list": [ 00:13:04.151 { 00:13:04.151 "name": null, 00:13:04.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.151 "is_configured": false, 00:13:04.151 "data_offset": 0, 00:13:04.151 "data_size": 63488 00:13:04.151 }, 00:13:04.151 { 00:13:04.151 "name": "BaseBdev2", 00:13:04.151 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:04.151 "is_configured": true, 00:13:04.151 "data_offset": 2048, 00:13:04.151 "data_size": 63488 00:13:04.151 } 00:13:04.151 ] 00:13:04.151 }' 00:13:04.151 17:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.151 17:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.409 17:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:04.409 17:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.409 17:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.409 [2024-11-26 17:57:46.199687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:13:04.409 [2024-11-26 17:57:46.220865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:04.409 17:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.409 [2024-11-26 17:57:46.223148] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:04.409 17:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.788 "name": "raid_bdev1", 00:13:05.788 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:05.788 "strip_size_kb": 0, 00:13:05.788 "state": "online", 00:13:05.788 "raid_level": "raid1", 00:13:05.788 "superblock": true, 00:13:05.788 "num_base_bdevs": 2, 00:13:05.788 
"num_base_bdevs_discovered": 2, 00:13:05.788 "num_base_bdevs_operational": 2, 00:13:05.788 "process": { 00:13:05.788 "type": "rebuild", 00:13:05.788 "target": "spare", 00:13:05.788 "progress": { 00:13:05.788 "blocks": 20480, 00:13:05.788 "percent": 32 00:13:05.788 } 00:13:05.788 }, 00:13:05.788 "base_bdevs_list": [ 00:13:05.788 { 00:13:05.788 "name": "spare", 00:13:05.788 "uuid": "d2fbf881-334b-54a0-95f9-79d099ca510a", 00:13:05.788 "is_configured": true, 00:13:05.788 "data_offset": 2048, 00:13:05.788 "data_size": 63488 00:13:05.788 }, 00:13:05.788 { 00:13:05.788 "name": "BaseBdev2", 00:13:05.788 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:05.788 "is_configured": true, 00:13:05.788 "data_offset": 2048, 00:13:05.788 "data_size": 63488 00:13:05.788 } 00:13:05.788 ] 00:13:05.788 }' 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.788 [2024-11-26 17:57:47.362534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:05.788 [2024-11-26 17:57:47.429709] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:05.788 [2024-11-26 17:57:47.429824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.788 [2024-11-26 17:57:47.429843] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:05.788 [2024-11-26 17:57:47.429859] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.788 17:57:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.788 "name": "raid_bdev1", 00:13:05.788 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:05.788 "strip_size_kb": 0, 00:13:05.788 "state": "online", 00:13:05.788 "raid_level": "raid1", 00:13:05.788 "superblock": true, 00:13:05.788 "num_base_bdevs": 2, 00:13:05.788 "num_base_bdevs_discovered": 1, 00:13:05.788 "num_base_bdevs_operational": 1, 00:13:05.788 "base_bdevs_list": [ 00:13:05.788 { 00:13:05.788 "name": null, 00:13:05.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.788 "is_configured": false, 00:13:05.788 "data_offset": 0, 00:13:05.788 "data_size": 63488 00:13:05.788 }, 00:13:05.788 { 00:13:05.788 "name": "BaseBdev2", 00:13:05.788 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:05.788 "is_configured": true, 00:13:05.788 "data_offset": 2048, 00:13:05.788 "data_size": 63488 00:13:05.788 } 00:13:05.788 ] 00:13:05.788 }' 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.788 17:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.358 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:06.358 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.358 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:06.358 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:06.358 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.358 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.358 17:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.358 17:57:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:06.358 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.358 17:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.358 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.358 "name": "raid_bdev1", 00:13:06.358 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:06.358 "strip_size_kb": 0, 00:13:06.358 "state": "online", 00:13:06.358 "raid_level": "raid1", 00:13:06.358 "superblock": true, 00:13:06.358 "num_base_bdevs": 2, 00:13:06.358 "num_base_bdevs_discovered": 1, 00:13:06.358 "num_base_bdevs_operational": 1, 00:13:06.358 "base_bdevs_list": [ 00:13:06.358 { 00:13:06.358 "name": null, 00:13:06.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.358 "is_configured": false, 00:13:06.358 "data_offset": 0, 00:13:06.358 "data_size": 63488 00:13:06.358 }, 00:13:06.358 { 00:13:06.358 "name": "BaseBdev2", 00:13:06.358 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:06.358 "is_configured": true, 00:13:06.358 "data_offset": 2048, 00:13:06.358 "data_size": 63488 00:13:06.358 } 00:13:06.358 ] 00:13:06.358 }' 00:13:06.358 17:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.358 17:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:06.358 17:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.358 17:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:06.358 17:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:06.358 17:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.358 17:57:48 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:06.358 [2024-11-26 17:57:48.091363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:06.358 [2024-11-26 17:57:48.109748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:06.358 17:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.358 17:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:06.358 [2024-11-26 17:57:48.112006] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:07.296 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.296 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.296 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.296 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.296 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.296 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.296 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.296 17:57:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.296 17:57:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.296 17:57:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.564 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.565 "name": "raid_bdev1", 00:13:07.565 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:07.565 "strip_size_kb": 0, 00:13:07.565 "state": "online", 
00:13:07.565 "raid_level": "raid1", 00:13:07.565 "superblock": true, 00:13:07.565 "num_base_bdevs": 2, 00:13:07.565 "num_base_bdevs_discovered": 2, 00:13:07.565 "num_base_bdevs_operational": 2, 00:13:07.565 "process": { 00:13:07.565 "type": "rebuild", 00:13:07.565 "target": "spare", 00:13:07.565 "progress": { 00:13:07.565 "blocks": 20480, 00:13:07.565 "percent": 32 00:13:07.565 } 00:13:07.565 }, 00:13:07.565 "base_bdevs_list": [ 00:13:07.565 { 00:13:07.565 "name": "spare", 00:13:07.565 "uuid": "d2fbf881-334b-54a0-95f9-79d099ca510a", 00:13:07.565 "is_configured": true, 00:13:07.565 "data_offset": 2048, 00:13:07.565 "data_size": 63488 00:13:07.565 }, 00:13:07.565 { 00:13:07.565 "name": "BaseBdev2", 00:13:07.565 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:07.565 "is_configured": true, 00:13:07.565 "data_offset": 2048, 00:13:07.565 "data_size": 63488 00:13:07.565 } 00:13:07.565 ] 00:13:07.565 }' 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:07.565 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 
']' 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=410 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.565 "name": "raid_bdev1", 00:13:07.565 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:07.565 "strip_size_kb": 0, 00:13:07.565 "state": "online", 00:13:07.565 "raid_level": "raid1", 00:13:07.565 "superblock": true, 00:13:07.565 "num_base_bdevs": 2, 00:13:07.565 "num_base_bdevs_discovered": 2, 00:13:07.565 "num_base_bdevs_operational": 2, 00:13:07.565 "process": { 00:13:07.565 "type": "rebuild", 00:13:07.565 "target": "spare", 00:13:07.565 "progress": { 00:13:07.565 "blocks": 22528, 00:13:07.565 "percent": 35 00:13:07.565 } 00:13:07.565 }, 00:13:07.565 
"base_bdevs_list": [ 00:13:07.565 { 00:13:07.565 "name": "spare", 00:13:07.565 "uuid": "d2fbf881-334b-54a0-95f9-79d099ca510a", 00:13:07.565 "is_configured": true, 00:13:07.565 "data_offset": 2048, 00:13:07.565 "data_size": 63488 00:13:07.565 }, 00:13:07.565 { 00:13:07.565 "name": "BaseBdev2", 00:13:07.565 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:07.565 "is_configured": true, 00:13:07.565 "data_offset": 2048, 00:13:07.565 "data_size": 63488 00:13:07.565 } 00:13:07.565 ] 00:13:07.565 }' 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.565 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.566 17:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:08.946 17:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.946 17:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.946 17:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.946 17:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.946 17:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.946 17:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.946 17:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.946 17:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.946 17:57:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.946 17:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.946 17:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.946 17:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.946 "name": "raid_bdev1", 00:13:08.946 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:08.946 "strip_size_kb": 0, 00:13:08.946 "state": "online", 00:13:08.946 "raid_level": "raid1", 00:13:08.946 "superblock": true, 00:13:08.946 "num_base_bdevs": 2, 00:13:08.946 "num_base_bdevs_discovered": 2, 00:13:08.946 "num_base_bdevs_operational": 2, 00:13:08.946 "process": { 00:13:08.946 "type": "rebuild", 00:13:08.946 "target": "spare", 00:13:08.946 "progress": { 00:13:08.946 "blocks": 45056, 00:13:08.946 "percent": 70 00:13:08.946 } 00:13:08.946 }, 00:13:08.946 "base_bdevs_list": [ 00:13:08.946 { 00:13:08.946 "name": "spare", 00:13:08.946 "uuid": "d2fbf881-334b-54a0-95f9-79d099ca510a", 00:13:08.946 "is_configured": true, 00:13:08.946 "data_offset": 2048, 00:13:08.946 "data_size": 63488 00:13:08.946 }, 00:13:08.946 { 00:13:08.946 "name": "BaseBdev2", 00:13:08.946 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:08.946 "is_configured": true, 00:13:08.946 "data_offset": 2048, 00:13:08.946 "data_size": 63488 00:13:08.946 } 00:13:08.946 ] 00:13:08.946 }' 00:13:08.946 17:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.946 17:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.946 17:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.946 17:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.946 17:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:13:09.515 [2024-11-26 17:57:51.228763] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:09.515 [2024-11-26 17:57:51.228876] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:09.515 [2024-11-26 17:57:51.229061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.774 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:09.774 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.774 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.774 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.774 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.774 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.774 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.774 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.774 17:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.774 17:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.774 17:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.774 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.774 "name": "raid_bdev1", 00:13:09.774 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:09.774 "strip_size_kb": 0, 00:13:09.774 "state": "online", 00:13:09.774 "raid_level": "raid1", 00:13:09.774 "superblock": true, 00:13:09.774 "num_base_bdevs": 2, 00:13:09.774 
"num_base_bdevs_discovered": 2, 00:13:09.774 "num_base_bdevs_operational": 2, 00:13:09.774 "base_bdevs_list": [ 00:13:09.774 { 00:13:09.774 "name": "spare", 00:13:09.774 "uuid": "d2fbf881-334b-54a0-95f9-79d099ca510a", 00:13:09.774 "is_configured": true, 00:13:09.774 "data_offset": 2048, 00:13:09.774 "data_size": 63488 00:13:09.774 }, 00:13:09.774 { 00:13:09.774 "name": "BaseBdev2", 00:13:09.774 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:09.774 "is_configured": true, 00:13:09.774 "data_offset": 2048, 00:13:09.774 "data_size": 63488 00:13:09.774 } 00:13:09.774 ] 00:13:09.774 }' 00:13:09.774 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.034 "name": "raid_bdev1", 00:13:10.034 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:10.034 "strip_size_kb": 0, 00:13:10.034 "state": "online", 00:13:10.034 "raid_level": "raid1", 00:13:10.034 "superblock": true, 00:13:10.034 "num_base_bdevs": 2, 00:13:10.034 "num_base_bdevs_discovered": 2, 00:13:10.034 "num_base_bdevs_operational": 2, 00:13:10.034 "base_bdevs_list": [ 00:13:10.034 { 00:13:10.034 "name": "spare", 00:13:10.034 "uuid": "d2fbf881-334b-54a0-95f9-79d099ca510a", 00:13:10.034 "is_configured": true, 00:13:10.034 "data_offset": 2048, 00:13:10.034 "data_size": 63488 00:13:10.034 }, 00:13:10.034 { 00:13:10.034 "name": "BaseBdev2", 00:13:10.034 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:10.034 "is_configured": true, 00:13:10.034 "data_offset": 2048, 00:13:10.034 "data_size": 63488 00:13:10.034 } 00:13:10.034 ] 00:13:10.034 }' 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.034 "name": "raid_bdev1", 00:13:10.034 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:10.034 "strip_size_kb": 0, 00:13:10.034 "state": "online", 00:13:10.034 "raid_level": "raid1", 00:13:10.034 "superblock": true, 00:13:10.034 "num_base_bdevs": 2, 00:13:10.034 "num_base_bdevs_discovered": 2, 00:13:10.034 "num_base_bdevs_operational": 2, 00:13:10.034 "base_bdevs_list": [ 00:13:10.034 { 00:13:10.034 "name": "spare", 00:13:10.034 "uuid": "d2fbf881-334b-54a0-95f9-79d099ca510a", 00:13:10.034 "is_configured": true, 00:13:10.034 "data_offset": 2048, 00:13:10.034 
"data_size": 63488 00:13:10.034 }, 00:13:10.034 { 00:13:10.034 "name": "BaseBdev2", 00:13:10.034 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:10.034 "is_configured": true, 00:13:10.034 "data_offset": 2048, 00:13:10.034 "data_size": 63488 00:13:10.034 } 00:13:10.034 ] 00:13:10.034 }' 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.034 17:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.603 [2024-11-26 17:57:52.226455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:10.603 [2024-11-26 17:57:52.226500] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:10.603 [2024-11-26 17:57:52.226613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:10.603 [2024-11-26 17:57:52.226710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:10.603 [2024-11-26 17:57:52.226729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:10.603 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:10.863 /dev/nbd0 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 
-- # local i 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.863 1+0 records in 00:13:10.863 1+0 records out 00:13:10.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395651 s, 10.4 MB/s 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:10.863 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:11.123 /dev/nbd1 00:13:11.123 17:57:52 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:11.123 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:11.123 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:11.123 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:11.123 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:11.123 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:11.123 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:11.123 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:11.123 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:11.123 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:11.123 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.123 1+0 records in 00:13:11.123 1+0 records out 00:13:11.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489739 s, 8.4 MB/s 00:13:11.123 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.123 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:11.123 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.123 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:11.123 17:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:11.123 17:57:52 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.123 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:11.123 17:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:11.382 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:11.382 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.382 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:11.382 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:11.383 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:11.383 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.383 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:11.641 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:11.641 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:11.641 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:11.641 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.641 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.641 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:11.641 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:11.641 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.641 17:57:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.641 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.902 [2024-11-26 17:57:53.657422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:13:11.902 [2024-11-26 17:57:53.657490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.902 [2024-11-26 17:57:53.657520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:11.902 [2024-11-26 17:57:53.657530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.902 [2024-11-26 17:57:53.660203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.902 [2024-11-26 17:57:53.660240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:11.902 [2024-11-26 17:57:53.660359] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:11.902 [2024-11-26 17:57:53.660416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.902 [2024-11-26 17:57:53.660584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.902 spare 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.902 17:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.902 [2024-11-26 17:57:53.760509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:11.902 [2024-11-26 17:57:53.760585] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:11.902 [2024-11-26 17:57:53.761003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:11.902 [2024-11-26 17:57:53.761312] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:11.902 [2024-11-26 17:57:53.761342] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:11.902 [2024-11-26 17:57:53.761596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.161 17:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.161 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:12.161 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.161 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.161 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.161 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.161 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:12.161 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.161 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.161 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.161 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.161 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.161 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.161 17:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.161 17:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.161 17:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.161 
17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.161 "name": "raid_bdev1", 00:13:12.161 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:12.161 "strip_size_kb": 0, 00:13:12.161 "state": "online", 00:13:12.161 "raid_level": "raid1", 00:13:12.161 "superblock": true, 00:13:12.161 "num_base_bdevs": 2, 00:13:12.161 "num_base_bdevs_discovered": 2, 00:13:12.161 "num_base_bdevs_operational": 2, 00:13:12.161 "base_bdevs_list": [ 00:13:12.161 { 00:13:12.161 "name": "spare", 00:13:12.161 "uuid": "d2fbf881-334b-54a0-95f9-79d099ca510a", 00:13:12.161 "is_configured": true, 00:13:12.161 "data_offset": 2048, 00:13:12.161 "data_size": 63488 00:13:12.161 }, 00:13:12.161 { 00:13:12.161 "name": "BaseBdev2", 00:13:12.161 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:12.161 "is_configured": true, 00:13:12.161 "data_offset": 2048, 00:13:12.161 "data_size": 63488 00:13:12.161 } 00:13:12.161 ] 00:13:12.161 }' 00:13:12.161 17:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.161 17:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.420 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:12.420 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.420 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:12.420 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:12.420 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.420 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.420 17:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.420 17:57:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:12.420 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.420 17:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.420 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.420 "name": "raid_bdev1", 00:13:12.420 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:12.420 "strip_size_kb": 0, 00:13:12.420 "state": "online", 00:13:12.420 "raid_level": "raid1", 00:13:12.420 "superblock": true, 00:13:12.420 "num_base_bdevs": 2, 00:13:12.420 "num_base_bdevs_discovered": 2, 00:13:12.420 "num_base_bdevs_operational": 2, 00:13:12.420 "base_bdevs_list": [ 00:13:12.420 { 00:13:12.420 "name": "spare", 00:13:12.420 "uuid": "d2fbf881-334b-54a0-95f9-79d099ca510a", 00:13:12.420 "is_configured": true, 00:13:12.420 "data_offset": 2048, 00:13:12.420 "data_size": 63488 00:13:12.420 }, 00:13:12.420 { 00:13:12.420 "name": "BaseBdev2", 00:13:12.420 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:12.420 "is_configured": true, 00:13:12.420 "data_offset": 2048, 00:13:12.420 "data_size": 63488 00:13:12.420 } 00:13:12.420 ] 00:13:12.420 }' 00:13:12.420 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.790 [2024-11-26 17:57:54.416684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.790 "name": "raid_bdev1", 00:13:12.790 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:12.790 "strip_size_kb": 0, 00:13:12.790 "state": "online", 00:13:12.790 "raid_level": "raid1", 00:13:12.790 "superblock": true, 00:13:12.790 "num_base_bdevs": 2, 00:13:12.790 "num_base_bdevs_discovered": 1, 00:13:12.790 "num_base_bdevs_operational": 1, 00:13:12.790 "base_bdevs_list": [ 00:13:12.790 { 00:13:12.790 "name": null, 00:13:12.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.790 "is_configured": false, 00:13:12.790 "data_offset": 0, 00:13:12.790 "data_size": 63488 00:13:12.790 }, 00:13:12.790 { 00:13:12.790 "name": "BaseBdev2", 00:13:12.790 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:12.790 "is_configured": true, 00:13:12.790 "data_offset": 2048, 00:13:12.790 "data_size": 63488 00:13:12.790 } 00:13:12.790 ] 00:13:12.790 }' 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.790 17:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.051 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:13.051 17:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.051 17:57:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.051 [2024-11-26 17:57:54.880001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:13.051 [2024-11-26 17:57:54.880257] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:13.051 [2024-11-26 17:57:54.880285] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:13.051 [2024-11-26 17:57:54.880325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:13.051 [2024-11-26 17:57:54.898433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:13.051 17:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.051 17:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:13.051 [2024-11-26 17:57:54.900545] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:14.431 17:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.431 17:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.431 17:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.431 17:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.431 17:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.431 17:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.431 17:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.431 17:57:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:14.431 17:57:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.431 17:57:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.431 17:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.431 "name": "raid_bdev1", 00:13:14.431 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:14.431 "strip_size_kb": 0, 00:13:14.431 "state": "online", 00:13:14.431 "raid_level": "raid1", 00:13:14.431 "superblock": true, 00:13:14.431 "num_base_bdevs": 2, 00:13:14.431 "num_base_bdevs_discovered": 2, 00:13:14.431 "num_base_bdevs_operational": 2, 00:13:14.431 "process": { 00:13:14.431 "type": "rebuild", 00:13:14.431 "target": "spare", 00:13:14.431 "progress": { 00:13:14.431 "blocks": 20480, 00:13:14.431 "percent": 32 00:13:14.431 } 00:13:14.431 }, 00:13:14.431 "base_bdevs_list": [ 00:13:14.431 { 00:13:14.431 "name": "spare", 00:13:14.431 "uuid": "d2fbf881-334b-54a0-95f9-79d099ca510a", 00:13:14.431 "is_configured": true, 00:13:14.431 "data_offset": 2048, 00:13:14.431 "data_size": 63488 00:13:14.431 }, 00:13:14.431 { 00:13:14.431 "name": "BaseBdev2", 00:13:14.431 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:14.432 "is_configured": true, 00:13:14.432 "data_offset": 2048, 00:13:14.432 "data_size": 63488 00:13:14.432 } 00:13:14.432 ] 00:13:14.432 }' 00:13:14.432 17:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:14.432 17:57:56 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.432 [2024-11-26 17:57:56.068409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.432 [2024-11-26 17:57:56.106882] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:14.432 [2024-11-26 17:57:56.106951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.432 [2024-11-26 17:57:56.106966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.432 [2024-11-26 17:57:56.106975] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.432 "name": "raid_bdev1", 00:13:14.432 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:14.432 "strip_size_kb": 0, 00:13:14.432 "state": "online", 00:13:14.432 "raid_level": "raid1", 00:13:14.432 "superblock": true, 00:13:14.432 "num_base_bdevs": 2, 00:13:14.432 "num_base_bdevs_discovered": 1, 00:13:14.432 "num_base_bdevs_operational": 1, 00:13:14.432 "base_bdevs_list": [ 00:13:14.432 { 00:13:14.432 "name": null, 00:13:14.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.432 "is_configured": false, 00:13:14.432 "data_offset": 0, 00:13:14.432 "data_size": 63488 00:13:14.432 }, 00:13:14.432 { 00:13:14.432 "name": "BaseBdev2", 00:13:14.432 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:14.432 "is_configured": true, 00:13:14.432 "data_offset": 2048, 00:13:14.432 "data_size": 63488 00:13:14.432 } 00:13:14.432 ] 00:13:14.432 }' 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.432 17:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.002 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:15.002 17:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:15.002 17:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.002 [2024-11-26 17:57:56.632252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:15.002 [2024-11-26 17:57:56.632326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.002 [2024-11-26 17:57:56.632352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:15.002 [2024-11-26 17:57:56.632364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.002 [2024-11-26 17:57:56.632886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.002 [2024-11-26 17:57:56.632911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:15.002 [2024-11-26 17:57:56.633014] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:15.002 [2024-11-26 17:57:56.633045] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:15.002 [2024-11-26 17:57:56.633056] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:15.002 [2024-11-26 17:57:56.633087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:15.002 [2024-11-26 17:57:56.650559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:15.002 spare 00:13:15.002 17:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.002 17:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:15.002 [2024-11-26 17:57:56.652564] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:15.940 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.940 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.940 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.940 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.940 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.940 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.940 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.940 17:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.940 17:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.940 17:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.940 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.940 "name": "raid_bdev1", 00:13:15.940 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:15.940 "strip_size_kb": 0, 00:13:15.940 "state": "online", 00:13:15.940 
"raid_level": "raid1", 00:13:15.940 "superblock": true, 00:13:15.940 "num_base_bdevs": 2, 00:13:15.940 "num_base_bdevs_discovered": 2, 00:13:15.940 "num_base_bdevs_operational": 2, 00:13:15.940 "process": { 00:13:15.940 "type": "rebuild", 00:13:15.940 "target": "spare", 00:13:15.940 "progress": { 00:13:15.940 "blocks": 20480, 00:13:15.940 "percent": 32 00:13:15.940 } 00:13:15.940 }, 00:13:15.940 "base_bdevs_list": [ 00:13:15.940 { 00:13:15.940 "name": "spare", 00:13:15.940 "uuid": "d2fbf881-334b-54a0-95f9-79d099ca510a", 00:13:15.940 "is_configured": true, 00:13:15.940 "data_offset": 2048, 00:13:15.940 "data_size": 63488 00:13:15.940 }, 00:13:15.940 { 00:13:15.940 "name": "BaseBdev2", 00:13:15.940 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:15.940 "is_configured": true, 00:13:15.940 "data_offset": 2048, 00:13:15.940 "data_size": 63488 00:13:15.940 } 00:13:15.940 ] 00:13:15.940 }' 00:13:15.940 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.940 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.940 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.200 [2024-11-26 17:57:57.836468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:16.200 [2024-11-26 17:57:57.858697] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:16.200 [2024-11-26 17:57:57.858763] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.200 [2024-11-26 17:57:57.858781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:16.200 [2024-11-26 17:57:57.858788] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.200 17:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.200 17:57:57 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.201 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.201 "name": "raid_bdev1", 00:13:16.201 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:16.201 "strip_size_kb": 0, 00:13:16.201 "state": "online", 00:13:16.201 "raid_level": "raid1", 00:13:16.201 "superblock": true, 00:13:16.201 "num_base_bdevs": 2, 00:13:16.201 "num_base_bdevs_discovered": 1, 00:13:16.201 "num_base_bdevs_operational": 1, 00:13:16.201 "base_bdevs_list": [ 00:13:16.201 { 00:13:16.201 "name": null, 00:13:16.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.201 "is_configured": false, 00:13:16.201 "data_offset": 0, 00:13:16.201 "data_size": 63488 00:13:16.201 }, 00:13:16.201 { 00:13:16.201 "name": "BaseBdev2", 00:13:16.201 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:16.201 "is_configured": true, 00:13:16.201 "data_offset": 2048, 00:13:16.201 "data_size": 63488 00:13:16.201 } 00:13:16.201 ] 00:13:16.201 }' 00:13:16.201 17:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.201 17:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.771 "name": "raid_bdev1", 00:13:16.771 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:16.771 "strip_size_kb": 0, 00:13:16.771 "state": "online", 00:13:16.771 "raid_level": "raid1", 00:13:16.771 "superblock": true, 00:13:16.771 "num_base_bdevs": 2, 00:13:16.771 "num_base_bdevs_discovered": 1, 00:13:16.771 "num_base_bdevs_operational": 1, 00:13:16.771 "base_bdevs_list": [ 00:13:16.771 { 00:13:16.771 "name": null, 00:13:16.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.771 "is_configured": false, 00:13:16.771 "data_offset": 0, 00:13:16.771 "data_size": 63488 00:13:16.771 }, 00:13:16.771 { 00:13:16.771 "name": "BaseBdev2", 00:13:16.771 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:16.771 "is_configured": true, 00:13:16.771 "data_offset": 2048, 00:13:16.771 "data_size": 63488 00:13:16.771 } 00:13:16.771 ] 00:13:16.771 }' 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.771 17:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.771 [2024-11-26 17:57:58.551014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:16.771 [2024-11-26 17:57:58.551098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.771 [2024-11-26 17:57:58.551131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:16.771 [2024-11-26 17:57:58.551154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.772 [2024-11-26 17:57:58.551687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.772 [2024-11-26 17:57:58.551707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:16.772 [2024-11-26 17:57:58.551804] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:16.772 [2024-11-26 17:57:58.551831] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:16.772 [2024-11-26 17:57:58.551842] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:16.772 [2024-11-26 17:57:58.551854] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:16.772 BaseBdev1 00:13:16.772 17:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:16.772 17:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:17.711 17:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:17.711 17:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.711 17:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.711 17:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.711 17:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.711 17:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:17.711 17:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.711 17:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.711 17:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.711 17:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.711 17:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.711 17:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.711 17:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.711 17:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.970 17:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.970 17:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.970 "name": "raid_bdev1", 00:13:17.970 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:17.970 "strip_size_kb": 0, 
00:13:17.970 "state": "online", 00:13:17.970 "raid_level": "raid1", 00:13:17.970 "superblock": true, 00:13:17.970 "num_base_bdevs": 2, 00:13:17.970 "num_base_bdevs_discovered": 1, 00:13:17.970 "num_base_bdevs_operational": 1, 00:13:17.970 "base_bdevs_list": [ 00:13:17.970 { 00:13:17.970 "name": null, 00:13:17.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.970 "is_configured": false, 00:13:17.970 "data_offset": 0, 00:13:17.970 "data_size": 63488 00:13:17.970 }, 00:13:17.970 { 00:13:17.970 "name": "BaseBdev2", 00:13:17.970 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:17.970 "is_configured": true, 00:13:17.970 "data_offset": 2048, 00:13:17.970 "data_size": 63488 00:13:17.970 } 00:13:17.970 ] 00:13:17.970 }' 00:13:17.970 17:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.970 17:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.230 17:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.230 17:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.230 17:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.230 17:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.230 17:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.230 17:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.230 17:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.230 17:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.230 17:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.488 "name": "raid_bdev1", 00:13:18.488 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:18.488 "strip_size_kb": 0, 00:13:18.488 "state": "online", 00:13:18.488 "raid_level": "raid1", 00:13:18.488 "superblock": true, 00:13:18.488 "num_base_bdevs": 2, 00:13:18.488 "num_base_bdevs_discovered": 1, 00:13:18.488 "num_base_bdevs_operational": 1, 00:13:18.488 "base_bdevs_list": [ 00:13:18.488 { 00:13:18.488 "name": null, 00:13:18.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.488 "is_configured": false, 00:13:18.488 "data_offset": 0, 00:13:18.488 "data_size": 63488 00:13:18.488 }, 00:13:18.488 { 00:13:18.488 "name": "BaseBdev2", 00:13:18.488 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:18.488 "is_configured": true, 00:13:18.488 "data_offset": 2048, 00:13:18.488 "data_size": 63488 00:13:18.488 } 00:13:18.488 ] 00:13:18.488 }' 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:18.488 17:58:00 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.488 [2024-11-26 17:58:00.200457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:18.488 [2024-11-26 17:58:00.200665] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:18.488 [2024-11-26 17:58:00.200692] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:18.488 request: 00:13:18.488 { 00:13:18.488 "base_bdev": "BaseBdev1", 00:13:18.488 "raid_bdev": "raid_bdev1", 00:13:18.488 "method": "bdev_raid_add_base_bdev", 00:13:18.488 "req_id": 1 00:13:18.488 } 00:13:18.488 Got JSON-RPC error response 00:13:18.488 response: 00:13:18.488 { 00:13:18.488 "code": -22, 00:13:18.488 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:18.488 } 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:18.488 17:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:19.422 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:19.422 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.422 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.422 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.422 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.422 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:19.422 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.422 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.422 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.422 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.422 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.422 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.422 17:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.422 17:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.422 17:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.422 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.422 "name": "raid_bdev1", 00:13:19.422 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 
00:13:19.422 "strip_size_kb": 0, 00:13:19.422 "state": "online", 00:13:19.422 "raid_level": "raid1", 00:13:19.422 "superblock": true, 00:13:19.422 "num_base_bdevs": 2, 00:13:19.422 "num_base_bdevs_discovered": 1, 00:13:19.422 "num_base_bdevs_operational": 1, 00:13:19.422 "base_bdevs_list": [ 00:13:19.422 { 00:13:19.422 "name": null, 00:13:19.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.422 "is_configured": false, 00:13:19.422 "data_offset": 0, 00:13:19.422 "data_size": 63488 00:13:19.422 }, 00:13:19.422 { 00:13:19.422 "name": "BaseBdev2", 00:13:19.422 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:19.422 "is_configured": true, 00:13:19.422 "data_offset": 2048, 00:13:19.422 "data_size": 63488 00:13:19.422 } 00:13:19.422 ] 00:13:19.422 }' 00:13:19.422 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.422 17:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.989 17:58:01 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.989 "name": "raid_bdev1", 00:13:19.989 "uuid": "7c73cdc7-85ed-40b5-aa9d-f07c2a979855", 00:13:19.989 "strip_size_kb": 0, 00:13:19.989 "state": "online", 00:13:19.989 "raid_level": "raid1", 00:13:19.989 "superblock": true, 00:13:19.989 "num_base_bdevs": 2, 00:13:19.989 "num_base_bdevs_discovered": 1, 00:13:19.989 "num_base_bdevs_operational": 1, 00:13:19.989 "base_bdevs_list": [ 00:13:19.989 { 00:13:19.989 "name": null, 00:13:19.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.989 "is_configured": false, 00:13:19.989 "data_offset": 0, 00:13:19.989 "data_size": 63488 00:13:19.989 }, 00:13:19.989 { 00:13:19.989 "name": "BaseBdev2", 00:13:19.989 "uuid": "ee63d305-a6ea-5455-b021-02a92c9c787f", 00:13:19.989 "is_configured": true, 00:13:19.989 "data_offset": 2048, 00:13:19.989 "data_size": 63488 00:13:19.989 } 00:13:19.989 ] 00:13:19.989 }' 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76069 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76069 ']' 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76069 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76069 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:19.989 killing process with pid 76069 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76069' 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76069 00:13:19.989 Received shutdown signal, test time was about 60.000000 seconds 00:13:19.989 00:13:19.989 Latency(us) 00:13:19.989 [2024-11-26T17:58:01.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.989 [2024-11-26T17:58:01.852Z] =================================================================================================================== 00:13:19.989 [2024-11-26T17:58:01.852Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:19.989 17:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76069 00:13:19.989 [2024-11-26 17:58:01.763547] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:19.989 [2024-11-26 17:58:01.763701] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.989 [2024-11-26 17:58:01.763772] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.989 [2024-11-26 17:58:01.763791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:20.556 [2024-11-26 17:58:02.110367] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:21.932 00:13:21.932 real 0m24.942s 
00:13:21.932 user 0m30.242s 00:13:21.932 sys 0m4.089s 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.932 ************************************ 00:13:21.932 END TEST raid_rebuild_test_sb 00:13:21.932 ************************************ 00:13:21.932 17:58:03 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:21.932 17:58:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:21.932 17:58:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.932 17:58:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:21.932 ************************************ 00:13:21.932 START TEST raid_rebuild_test_io 00:13:21.932 ************************************ 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:21.932 
17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:21.932 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:21.933 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:21.933 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:21.933 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76817 00:13:21.933 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76817 00:13:21.933 17:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:21.933 17:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76817 ']' 00:13:21.933 17:58:03 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.933 17:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:21.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.933 17:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.933 17:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:21.933 17:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.933 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:21.933 Zero copy mechanism will not be used. 00:13:21.933 [2024-11-26 17:58:03.585658] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:13:21.933 [2024-11-26 17:58:03.585783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76817 ] 00:13:21.933 [2024-11-26 17:58:03.764617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.192 [2024-11-26 17:58:03.889207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.450 [2024-11-26 17:58:04.123492] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.450 [2024-11-26 17:58:04.123569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.709 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.709 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:22.709 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:13:22.709 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:22.709 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.709 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.709 BaseBdev1_malloc 00:13:22.709 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.709 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:22.709 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.709 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.709 [2024-11-26 17:58:04.519105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:22.709 [2024-11-26 17:58:04.519180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.709 [2024-11-26 17:58:04.519207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:22.709 [2024-11-26 17:58:04.519221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.709 [2024-11-26 17:58:04.521766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.709 [2024-11-26 17:58:04.521815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:22.709 BaseBdev1 00:13:22.709 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.709 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:22.709 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:22.709 17:58:04 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.709 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.969 BaseBdev2_malloc 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.969 [2024-11-26 17:58:04.580318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:22.969 [2024-11-26 17:58:04.580405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.969 [2024-11-26 17:58:04.580433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:22.969 [2024-11-26 17:58:04.580446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.969 [2024-11-26 17:58:04.582981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.969 [2024-11-26 17:58:04.583041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:22.969 BaseBdev2 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.969 spare_malloc 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.969 spare_delay 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.969 [2024-11-26 17:58:04.659501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:22.969 [2024-11-26 17:58:04.659574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.969 [2024-11-26 17:58:04.659601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:22.969 [2024-11-26 17:58:04.659613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.969 [2024-11-26 17:58:04.662115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.969 [2024-11-26 17:58:04.662160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:22.969 spare 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.969 17:58:04 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.969 [2024-11-26 17:58:04.671567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.969 [2024-11-26 17:58:04.673769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:22.969 [2024-11-26 17:58:04.673910] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:22.969 [2024-11-26 17:58:04.673928] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:22.969 [2024-11-26 17:58:04.674279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:22.969 [2024-11-26 17:58:04.674487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:22.969 [2024-11-26 17:58:04.674508] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:22.969 [2024-11-26 17:58:04.674728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.969 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.969 "name": "raid_bdev1", 00:13:22.969 "uuid": "38c63e55-f6d2-4b32-8320-b41a1b907292", 00:13:22.969 "strip_size_kb": 0, 00:13:22.969 "state": "online", 00:13:22.969 "raid_level": "raid1", 00:13:22.969 "superblock": false, 00:13:22.969 "num_base_bdevs": 2, 00:13:22.969 "num_base_bdevs_discovered": 2, 00:13:22.969 "num_base_bdevs_operational": 2, 00:13:22.969 "base_bdevs_list": [ 00:13:22.969 { 00:13:22.969 "name": "BaseBdev1", 00:13:22.969 "uuid": "356549e9-e2fd-5502-823a-9977b2dc017d", 00:13:22.969 "is_configured": true, 00:13:22.969 "data_offset": 0, 00:13:22.969 "data_size": 65536 00:13:22.969 }, 00:13:22.969 { 00:13:22.969 "name": "BaseBdev2", 00:13:22.969 "uuid": "eab57f1f-3e33-5e17-8876-616d7605bbe2", 00:13:22.969 "is_configured": true, 00:13:22.969 "data_offset": 0, 00:13:22.969 "data_size": 65536 00:13:22.969 } 00:13:22.969 ] 00:13:22.969 }' 00:13:22.970 17:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.970 17:58:04 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:23.537 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:23.537 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:23.537 17:58:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.537 17:58:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.537 [2024-11-26 17:58:05.167049] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:23.537 17:58:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.537 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:23.538 [2024-11-26 17:58:05.258549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:23.538 "name": "raid_bdev1", 00:13:23.538 "uuid": "38c63e55-f6d2-4b32-8320-b41a1b907292", 00:13:23.538 "strip_size_kb": 0, 00:13:23.538 "state": "online", 00:13:23.538 "raid_level": "raid1", 00:13:23.538 "superblock": false, 00:13:23.538 "num_base_bdevs": 2, 00:13:23.538 "num_base_bdevs_discovered": 1, 00:13:23.538 "num_base_bdevs_operational": 1, 00:13:23.538 "base_bdevs_list": [ 00:13:23.538 { 00:13:23.538 "name": null, 00:13:23.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.538 "is_configured": false, 00:13:23.538 "data_offset": 0, 00:13:23.538 "data_size": 65536 00:13:23.538 }, 00:13:23.538 { 00:13:23.538 "name": "BaseBdev2", 00:13:23.538 "uuid": "eab57f1f-3e33-5e17-8876-616d7605bbe2", 00:13:23.538 "is_configured": true, 00:13:23.538 "data_offset": 0, 00:13:23.538 "data_size": 65536 00:13:23.538 } 00:13:23.538 ] 00:13:23.538 }' 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.538 17:58:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.538 [2024-11-26 17:58:05.355799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:23.538 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:23.538 Zero copy mechanism will not be used. 00:13:23.538 Running I/O for 60 seconds... 
00:13:24.104 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:24.104 17:58:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.104 17:58:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.104 [2024-11-26 17:58:05.712854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.104 17:58:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.104 17:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:24.104 [2024-11-26 17:58:05.802330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:24.104 [2024-11-26 17:58:05.804695] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:24.104 [2024-11-26 17:58:05.906926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:24.104 [2024-11-26 17:58:05.907578] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:24.363 [2024-11-26 17:58:06.046344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:24.363 [2024-11-26 17:58:06.046816] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:24.621 121.00 IOPS, 363.00 MiB/s [2024-11-26T17:58:06.484Z] [2024-11-26 17:58:06.392953] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:25.190 17:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.190 17:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:25.190 17:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.190 17:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.190 17:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.190 17:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.190 17:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.190 17:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.190 17:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.190 17:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.190 17:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.190 "name": "raid_bdev1", 00:13:25.190 "uuid": "38c63e55-f6d2-4b32-8320-b41a1b907292", 00:13:25.190 "strip_size_kb": 0, 00:13:25.190 "state": "online", 00:13:25.190 "raid_level": "raid1", 00:13:25.190 "superblock": false, 00:13:25.190 "num_base_bdevs": 2, 00:13:25.190 "num_base_bdevs_discovered": 2, 00:13:25.190 "num_base_bdevs_operational": 2, 00:13:25.190 "process": { 00:13:25.190 "type": "rebuild", 00:13:25.190 "target": "spare", 00:13:25.190 "progress": { 00:13:25.190 "blocks": 14336, 00:13:25.190 "percent": 21 00:13:25.190 } 00:13:25.190 }, 00:13:25.190 "base_bdevs_list": [ 00:13:25.190 { 00:13:25.190 "name": "spare", 00:13:25.190 "uuid": "7bc00952-a834-5e45-b7be-8f484dbe2757", 00:13:25.190 "is_configured": true, 00:13:25.190 "data_offset": 0, 00:13:25.190 "data_size": 65536 00:13:25.190 }, 00:13:25.190 { 00:13:25.190 "name": "BaseBdev2", 00:13:25.190 "uuid": "eab57f1f-3e33-5e17-8876-616d7605bbe2", 00:13:25.190 "is_configured": true, 00:13:25.190 "data_offset": 0, 00:13:25.190 
"data_size": 65536 00:13:25.190 } 00:13:25.190 ] 00:13:25.190 }' 00:13:25.190 17:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.190 [2024-11-26 17:58:06.859314] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:25.190 17:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.190 17:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.190 17:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.190 17:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:25.190 17:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.190 17:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.190 [2024-11-26 17:58:06.927328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:25.190 [2024-11-26 17:58:06.969482] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:25.190 [2024-11-26 17:58:06.970441] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:25.190 [2024-11-26 17:58:06.979267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.190 [2024-11-26 17:58:06.979334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:25.190 [2024-11-26 17:58:06.979348] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:25.190 [2024-11-26 17:58:07.041952] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.450 "name": "raid_bdev1", 00:13:25.450 "uuid": "38c63e55-f6d2-4b32-8320-b41a1b907292", 00:13:25.450 "strip_size_kb": 0, 00:13:25.450 "state": "online", 00:13:25.450 "raid_level": "raid1", 
00:13:25.450 "superblock": false, 00:13:25.450 "num_base_bdevs": 2, 00:13:25.450 "num_base_bdevs_discovered": 1, 00:13:25.450 "num_base_bdevs_operational": 1, 00:13:25.450 "base_bdevs_list": [ 00:13:25.450 { 00:13:25.450 "name": null, 00:13:25.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.450 "is_configured": false, 00:13:25.450 "data_offset": 0, 00:13:25.450 "data_size": 65536 00:13:25.450 }, 00:13:25.450 { 00:13:25.450 "name": "BaseBdev2", 00:13:25.450 "uuid": "eab57f1f-3e33-5e17-8876-616d7605bbe2", 00:13:25.450 "is_configured": true, 00:13:25.450 "data_offset": 0, 00:13:25.450 "data_size": 65536 00:13:25.450 } 00:13:25.450 ] 00:13:25.450 }' 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.450 17:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.711 125.50 IOPS, 376.50 MiB/s [2024-11-26T17:58:07.574Z] 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.711 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.711 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.711 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.711 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.711 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.711 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.711 17:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.711 17:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.971 17:58:07 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.971 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.971 "name": "raid_bdev1", 00:13:25.971 "uuid": "38c63e55-f6d2-4b32-8320-b41a1b907292", 00:13:25.971 "strip_size_kb": 0, 00:13:25.971 "state": "online", 00:13:25.971 "raid_level": "raid1", 00:13:25.971 "superblock": false, 00:13:25.971 "num_base_bdevs": 2, 00:13:25.971 "num_base_bdevs_discovered": 1, 00:13:25.971 "num_base_bdevs_operational": 1, 00:13:25.971 "base_bdevs_list": [ 00:13:25.971 { 00:13:25.971 "name": null, 00:13:25.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.971 "is_configured": false, 00:13:25.971 "data_offset": 0, 00:13:25.971 "data_size": 65536 00:13:25.971 }, 00:13:25.971 { 00:13:25.971 "name": "BaseBdev2", 00:13:25.971 "uuid": "eab57f1f-3e33-5e17-8876-616d7605bbe2", 00:13:25.971 "is_configured": true, 00:13:25.971 "data_offset": 0, 00:13:25.971 "data_size": 65536 00:13:25.971 } 00:13:25.971 ] 00:13:25.971 }' 00:13:25.971 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.971 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:25.971 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.971 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.971 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:25.971 17:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.971 17:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.971 [2024-11-26 17:58:07.719557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:25.971 17:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:25.971 17:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:25.971 [2024-11-26 17:58:07.780075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:25.971 [2024-11-26 17:58:07.782335] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:26.231 [2024-11-26 17:58:07.906084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:26.231 [2024-11-26 17:58:07.906726] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:26.489 [2024-11-26 17:58:08.132391] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:26.489 [2024-11-26 17:58:08.132766] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:27.007 140.33 IOPS, 421.00 MiB/s [2024-11-26T17:58:08.870Z] 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.007 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.007 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.007 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.007 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.007 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.007 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.007 17:58:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.007 17:58:08 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.007 17:58:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.007 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.007 "name": "raid_bdev1", 00:13:27.007 "uuid": "38c63e55-f6d2-4b32-8320-b41a1b907292", 00:13:27.007 "strip_size_kb": 0, 00:13:27.007 "state": "online", 00:13:27.007 "raid_level": "raid1", 00:13:27.007 "superblock": false, 00:13:27.007 "num_base_bdevs": 2, 00:13:27.007 "num_base_bdevs_discovered": 2, 00:13:27.007 "num_base_bdevs_operational": 2, 00:13:27.007 "process": { 00:13:27.007 "type": "rebuild", 00:13:27.007 "target": "spare", 00:13:27.007 "progress": { 00:13:27.007 "blocks": 14336, 00:13:27.007 "percent": 21 00:13:27.007 } 00:13:27.007 }, 00:13:27.007 "base_bdevs_list": [ 00:13:27.007 { 00:13:27.007 "name": "spare", 00:13:27.007 "uuid": "7bc00952-a834-5e45-b7be-8f484dbe2757", 00:13:27.007 "is_configured": true, 00:13:27.007 "data_offset": 0, 00:13:27.007 "data_size": 65536 00:13:27.007 }, 00:13:27.007 { 00:13:27.007 "name": "BaseBdev2", 00:13:27.007 "uuid": "eab57f1f-3e33-5e17-8876-616d7605bbe2", 00:13:27.007 "is_configured": true, 00:13:27.007 "data_offset": 0, 00:13:27.007 "data_size": 65536 00:13:27.007 } 00:13:27.007 ] 00:13:27.007 }' 00:13:27.007 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.007 [2024-11-26 17:58:08.847190] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:27.007 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.267 17:58:08 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=429 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.267 "name": "raid_bdev1", 00:13:27.267 "uuid": "38c63e55-f6d2-4b32-8320-b41a1b907292", 00:13:27.267 "strip_size_kb": 0, 00:13:27.267 "state": "online", 00:13:27.267 
"raid_level": "raid1", 00:13:27.267 "superblock": false, 00:13:27.267 "num_base_bdevs": 2, 00:13:27.267 "num_base_bdevs_discovered": 2, 00:13:27.267 "num_base_bdevs_operational": 2, 00:13:27.267 "process": { 00:13:27.267 "type": "rebuild", 00:13:27.267 "target": "spare", 00:13:27.267 "progress": { 00:13:27.267 "blocks": 16384, 00:13:27.267 "percent": 25 00:13:27.267 } 00:13:27.267 }, 00:13:27.267 "base_bdevs_list": [ 00:13:27.267 { 00:13:27.267 "name": "spare", 00:13:27.267 "uuid": "7bc00952-a834-5e45-b7be-8f484dbe2757", 00:13:27.267 "is_configured": true, 00:13:27.267 "data_offset": 0, 00:13:27.267 "data_size": 65536 00:13:27.267 }, 00:13:27.267 { 00:13:27.267 "name": "BaseBdev2", 00:13:27.267 "uuid": "eab57f1f-3e33-5e17-8876-616d7605bbe2", 00:13:27.267 "is_configured": true, 00:13:27.267 "data_offset": 0, 00:13:27.267 "data_size": 65536 00:13:27.267 } 00:13:27.267 ] 00:13:27.267 }' 00:13:27.267 17:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.267 17:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.267 17:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.267 17:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.267 17:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.525 [2024-11-26 17:58:09.182411] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:27.785 125.25 IOPS, 375.75 MiB/s [2024-11-26T17:58:09.648Z] [2024-11-26 17:58:09.402107] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:27.785 [2024-11-26 17:58:09.402593] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 
00:13:28.353 17:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:28.353 17:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.353 17:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.353 17:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.353 17:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.353 17:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.353 17:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.353 17:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.353 17:58:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.353 17:58:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.353 17:58:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.353 17:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.353 "name": "raid_bdev1", 00:13:28.353 "uuid": "38c63e55-f6d2-4b32-8320-b41a1b907292", 00:13:28.353 "strip_size_kb": 0, 00:13:28.353 "state": "online", 00:13:28.353 "raid_level": "raid1", 00:13:28.353 "superblock": false, 00:13:28.353 "num_base_bdevs": 2, 00:13:28.353 "num_base_bdevs_discovered": 2, 00:13:28.353 "num_base_bdevs_operational": 2, 00:13:28.353 "process": { 00:13:28.353 "type": "rebuild", 00:13:28.353 "target": "spare", 00:13:28.353 "progress": { 00:13:28.353 "blocks": 30720, 00:13:28.353 "percent": 46 00:13:28.353 } 00:13:28.353 }, 00:13:28.353 "base_bdevs_list": [ 00:13:28.353 { 00:13:28.353 "name": "spare", 00:13:28.353 "uuid": 
"7bc00952-a834-5e45-b7be-8f484dbe2757", 00:13:28.353 "is_configured": true, 00:13:28.353 "data_offset": 0, 00:13:28.353 "data_size": 65536 00:13:28.353 }, 00:13:28.353 { 00:13:28.353 "name": "BaseBdev2", 00:13:28.353 "uuid": "eab57f1f-3e33-5e17-8876-616d7605bbe2", 00:13:28.353 "is_configured": true, 00:13:28.353 "data_offset": 0, 00:13:28.353 "data_size": 65536 00:13:28.353 } 00:13:28.353 ] 00:13:28.353 }' 00:13:28.353 17:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.353 17:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.353 17:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.353 [2024-11-26 17:58:10.195240] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:28.611 17:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.611 17:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:29.546 112.40 IOPS, 337.20 MiB/s [2024-11-26T17:58:11.409Z] 17:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:29.546 17:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.547 17:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.547 17:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.547 17:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.547 17:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.547 17:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.547 17:58:11 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.547 17:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.547 17:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.547 17:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.547 17:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.547 "name": "raid_bdev1", 00:13:29.547 "uuid": "38c63e55-f6d2-4b32-8320-b41a1b907292", 00:13:29.547 "strip_size_kb": 0, 00:13:29.547 "state": "online", 00:13:29.547 "raid_level": "raid1", 00:13:29.547 "superblock": false, 00:13:29.547 "num_base_bdevs": 2, 00:13:29.547 "num_base_bdevs_discovered": 2, 00:13:29.547 "num_base_bdevs_operational": 2, 00:13:29.547 "process": { 00:13:29.547 "type": "rebuild", 00:13:29.547 "target": "spare", 00:13:29.547 "progress": { 00:13:29.547 "blocks": 51200, 00:13:29.547 "percent": 78 00:13:29.547 } 00:13:29.547 }, 00:13:29.547 "base_bdevs_list": [ 00:13:29.547 { 00:13:29.547 "name": "spare", 00:13:29.547 "uuid": "7bc00952-a834-5e45-b7be-8f484dbe2757", 00:13:29.547 "is_configured": true, 00:13:29.547 "data_offset": 0, 00:13:29.547 "data_size": 65536 00:13:29.547 }, 00:13:29.547 { 00:13:29.547 "name": "BaseBdev2", 00:13:29.547 "uuid": "eab57f1f-3e33-5e17-8876-616d7605bbe2", 00:13:29.547 "is_configured": true, 00:13:29.547 "data_offset": 0, 00:13:29.547 "data_size": 65536 00:13:29.547 } 00:13:29.547 ] 00:13:29.547 }' 00:13:29.547 17:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.547 17:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.547 17:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.547 101.17 IOPS, 303.50 MiB/s [2024-11-26T17:58:11.410Z] 
17:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.547 17:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:29.806 [2024-11-26 17:58:11.542857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:29.806 [2024-11-26 17:58:11.543573] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:30.063 [2024-11-26 17:58:11.762288] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:30.632 [2024-11-26 17:58:12.211005] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:30.632 [2024-11-26 17:58:12.310788] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:30.632 [2024-11-26 17:58:12.320847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.632 92.00 IOPS, 276.00 MiB/s [2024-11-26T17:58:12.495Z] 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:30.632 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.632 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.632 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.632 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.632 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.632 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.632 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:30.632 17:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.632 17:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.632 17:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.632 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.632 "name": "raid_bdev1", 00:13:30.632 "uuid": "38c63e55-f6d2-4b32-8320-b41a1b907292", 00:13:30.632 "strip_size_kb": 0, 00:13:30.632 "state": "online", 00:13:30.632 "raid_level": "raid1", 00:13:30.632 "superblock": false, 00:13:30.632 "num_base_bdevs": 2, 00:13:30.632 "num_base_bdevs_discovered": 2, 00:13:30.632 "num_base_bdevs_operational": 2, 00:13:30.632 "base_bdevs_list": [ 00:13:30.632 { 00:13:30.632 "name": "spare", 00:13:30.632 "uuid": "7bc00952-a834-5e45-b7be-8f484dbe2757", 00:13:30.632 "is_configured": true, 00:13:30.632 "data_offset": 0, 00:13:30.632 "data_size": 65536 00:13:30.632 }, 00:13:30.632 { 00:13:30.632 "name": "BaseBdev2", 00:13:30.632 "uuid": "eab57f1f-3e33-5e17-8876-616d7605bbe2", 00:13:30.632 "is_configured": true, 00:13:30.632 "data_offset": 0, 00:13:30.632 "data_size": 65536 00:13:30.632 } 00:13:30.632 ] 00:13:30.632 }' 00:13:30.632 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.632 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:30.632 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:30.890 17:58:12 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.890 "name": "raid_bdev1", 00:13:30.890 "uuid": "38c63e55-f6d2-4b32-8320-b41a1b907292", 00:13:30.890 "strip_size_kb": 0, 00:13:30.890 "state": "online", 00:13:30.890 "raid_level": "raid1", 00:13:30.890 "superblock": false, 00:13:30.890 "num_base_bdevs": 2, 00:13:30.890 "num_base_bdevs_discovered": 2, 00:13:30.890 "num_base_bdevs_operational": 2, 00:13:30.890 "base_bdevs_list": [ 00:13:30.890 { 00:13:30.890 "name": "spare", 00:13:30.890 "uuid": "7bc00952-a834-5e45-b7be-8f484dbe2757", 00:13:30.890 "is_configured": true, 00:13:30.890 "data_offset": 0, 00:13:30.890 "data_size": 65536 00:13:30.890 }, 00:13:30.890 { 00:13:30.890 "name": "BaseBdev2", 00:13:30.890 "uuid": "eab57f1f-3e33-5e17-8876-616d7605bbe2", 00:13:30.890 "is_configured": true, 00:13:30.890 "data_offset": 0, 00:13:30.890 "data_size": 65536 00:13:30.890 } 00:13:30.890 ] 00:13:30.890 }' 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.890 "name": "raid_bdev1", 00:13:30.890 "uuid": "38c63e55-f6d2-4b32-8320-b41a1b907292", 00:13:30.890 "strip_size_kb": 0, 00:13:30.890 "state": "online", 00:13:30.890 "raid_level": "raid1", 00:13:30.890 "superblock": false, 00:13:30.890 "num_base_bdevs": 2, 00:13:30.890 "num_base_bdevs_discovered": 2, 00:13:30.890 "num_base_bdevs_operational": 2, 00:13:30.890 "base_bdevs_list": [ 00:13:30.890 { 00:13:30.890 "name": "spare", 00:13:30.890 "uuid": "7bc00952-a834-5e45-b7be-8f484dbe2757", 00:13:30.890 "is_configured": true, 00:13:30.890 "data_offset": 0, 00:13:30.890 "data_size": 65536 00:13:30.890 }, 00:13:30.890 { 00:13:30.890 "name": "BaseBdev2", 00:13:30.890 "uuid": "eab57f1f-3e33-5e17-8876-616d7605bbe2", 00:13:30.890 "is_configured": true, 00:13:30.890 "data_offset": 0, 00:13:30.890 "data_size": 65536 00:13:30.890 } 00:13:30.890 ] 00:13:30.890 }' 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.890 17:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.148 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:31.148 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.148 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.436 [2024-11-26 17:58:13.017743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:31.436 [2024-11-26 17:58:13.017791] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.436 00:13:31.436 Latency(us) 00:13:31.436 [2024-11-26T17:58:13.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.436 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO 
size: 3145728) 00:13:31.436 raid_bdev1 : 7.77 85.67 257.00 0.00 0.00 16298.28 348.79 118136.51 00:13:31.436 [2024-11-26T17:58:13.299Z] =================================================================================================================== 00:13:31.436 [2024-11-26T17:58:13.299Z] Total : 85.67 257.00 0.00 0.00 16298.28 348.79 118136.51 00:13:31.436 { 00:13:31.436 "results": [ 00:13:31.436 { 00:13:31.436 "job": "raid_bdev1", 00:13:31.436 "core_mask": "0x1", 00:13:31.436 "workload": "randrw", 00:13:31.436 "percentage": 50, 00:13:31.436 "status": "finished", 00:13:31.436 "queue_depth": 2, 00:13:31.436 "io_size": 3145728, 00:13:31.436 "runtime": 7.77417, 00:13:31.436 "iops": 85.66830928575011, 00:13:31.436 "mibps": 257.0049278572503, 00:13:31.436 "io_failed": 0, 00:13:31.436 "io_timeout": 0, 00:13:31.436 "avg_latency_us": 16298.284632230485, 00:13:31.436 "min_latency_us": 348.7860262008734, 00:13:31.436 "max_latency_us": 118136.51004366812 00:13:31.436 } 00:13:31.436 ], 00:13:31.436 "core_count": 1 00:13:31.436 } 00:13:31.436 [2024-11-26 17:58:13.143178] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.436 [2024-11-26 17:58:13.143271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.436 [2024-11-26 17:58:13.143366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.436 [2024-11-26 17:58:13.143381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:31.436 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.436 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.436 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:31.436 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.436 
17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.436 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.436 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:31.436 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:31.436 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:31.436 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:31.436 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:31.436 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:31.436 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:31.436 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:31.436 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:31.436 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:31.436 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:31.436 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:31.436 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:31.702 /dev/nbd0 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@873 -- # local i 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.702 1+0 records in 00:13:31.702 1+0 records out 00:13:31.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404575 s, 10.1 MB/s 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z 
BaseBdev2 ']' 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:31.702 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:31.960 /dev/nbd1 00:13:31.960 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:31.960 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:31.960 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:31.960 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:31.960 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:31.960 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:31.960 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:31.960 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- 
# break 00:13:31.960 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:31.960 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:31.960 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.960 1+0 records in 00:13:31.960 1+0 records out 00:13:31.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290904 s, 14.1 MB/s 00:13:31.961 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.961 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:31.961 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.961 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:31.961 17:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:31.961 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.961 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:31.961 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:32.218 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:32.218 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.218 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:32.218 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:32.218 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:32.218 
17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.218 17:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:32.476 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:32.476 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:32.476 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:32.476 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.476 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.476 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:32.476 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:32.476 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.476 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:32.476 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.476 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:32.476 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:32.476 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:32.476 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.476 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76817 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76817 ']' 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76817 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76817 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.735 killing process with pid 76817 00:13:32.735 Received shutdown signal, test time was about 9.134534 seconds 00:13:32.735 00:13:32.735 Latency(us) 00:13:32.735 [2024-11-26T17:58:14.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.735 [2024-11-26T17:58:14.598Z] =================================================================================================================== 00:13:32.735 
[2024-11-26T17:58:14.598Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76817' 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76817 00:13:32.735 [2024-11-26 17:58:14.475221] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:32.735 17:58:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76817 00:13:32.994 [2024-11-26 17:58:14.721369] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:34.371 17:58:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:34.371 ************************************ 00:13:34.371 END TEST raid_rebuild_test_io 00:13:34.371 ************************************ 00:13:34.371 00:13:34.371 real 0m12.532s 00:13:34.371 user 0m15.861s 00:13:34.371 sys 0m1.458s 00:13:34.371 17:58:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.371 17:58:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.371 17:58:16 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:34.371 17:58:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:34.371 17:58:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.371 17:58:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:34.371 ************************************ 00:13:34.371 START TEST raid_rebuild_test_sb_io 00:13:34.371 ************************************ 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:34.372 17:58:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77193 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77193 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77193 ']' 00:13:34.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.372 17:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.372 [2024-11-26 17:58:16.191672] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:13:34.372 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:13:34.372 Zero copy mechanism will not be used. 00:13:34.372 [2024-11-26 17:58:16.191889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77193 ] 00:13:34.632 [2024-11-26 17:58:16.373546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.985 [2024-11-26 17:58:16.504954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.985 [2024-11-26 17:58:16.732667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.985 [2024-11-26 17:58:16.732742] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.267 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.267 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:35.267 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.267 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:35.267 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.267 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.267 BaseBdev1_malloc 00:13:35.267 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.267 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:35.267 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.267 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.267 [2024-11-26 
17:58:17.113459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:35.267 [2024-11-26 17:58:17.113523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.267 [2024-11-26 17:58:17.113563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:35.267 [2024-11-26 17:58:17.113576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.267 [2024-11-26 17:58:17.115802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.267 [2024-11-26 17:58:17.115845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:35.268 BaseBdev1 00:13:35.268 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.268 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.268 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:35.268 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.268 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.527 BaseBdev2_malloc 00:13:35.527 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.527 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:35.527 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.527 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.527 [2024-11-26 17:58:17.175199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:35.527 [2024-11-26 17:58:17.175333] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.527 [2024-11-26 17:58:17.175367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:35.528 [2024-11-26 17:58:17.175381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.528 [2024-11-26 17:58:17.177857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.528 [2024-11-26 17:58:17.177903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:35.528 BaseBdev2 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.528 spare_malloc 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.528 spare_delay 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:35.528 [2024-11-26 17:58:17.260142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:35.528 [2024-11-26 17:58:17.260277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.528 [2024-11-26 17:58:17.260312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:35.528 [2024-11-26 17:58:17.260325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.528 [2024-11-26 17:58:17.262849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.528 [2024-11-26 17:58:17.262896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:35.528 spare 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.528 [2024-11-26 17:58:17.272227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.528 [2024-11-26 17:58:17.274369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.528 [2024-11-26 17:58:17.274577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:35.528 [2024-11-26 17:58:17.274596] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:35.528 [2024-11-26 17:58:17.274908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:35.528 [2024-11-26 17:58:17.275133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: 
raid bdev generic 0x617000007780 00:13:35.528 [2024-11-26 17:58:17.275214] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:35.528 [2024-11-26 17:58:17.275438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.528 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.528 "name": "raid_bdev1", 00:13:35.528 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:35.528 "strip_size_kb": 0, 00:13:35.528 "state": "online", 00:13:35.528 "raid_level": "raid1", 00:13:35.528 "superblock": true, 00:13:35.528 "num_base_bdevs": 2, 00:13:35.528 "num_base_bdevs_discovered": 2, 00:13:35.528 "num_base_bdevs_operational": 2, 00:13:35.528 "base_bdevs_list": [ 00:13:35.528 { 00:13:35.528 "name": "BaseBdev1", 00:13:35.528 "uuid": "ba6e50e8-fadb-512c-9380-10e11a8e55ad", 00:13:35.528 "is_configured": true, 00:13:35.528 "data_offset": 2048, 00:13:35.528 "data_size": 63488 00:13:35.528 }, 00:13:35.528 { 00:13:35.528 "name": "BaseBdev2", 00:13:35.528 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:35.528 "is_configured": true, 00:13:35.528 "data_offset": 2048, 00:13:35.528 "data_size": 63488 00:13:35.528 } 00:13:35.528 ] 00:13:35.529 }' 00:13:35.529 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.529 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:36.097 [2024-11-26 17:58:17.731684] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.097 17:58:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.097 [2024-11-26 17:58:17.831212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.097 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.097 "name": "raid_bdev1", 00:13:36.097 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:36.097 "strip_size_kb": 0, 00:13:36.097 "state": "online", 00:13:36.097 "raid_level": "raid1", 00:13:36.098 "superblock": true, 00:13:36.098 "num_base_bdevs": 2, 00:13:36.098 "num_base_bdevs_discovered": 1, 00:13:36.098 "num_base_bdevs_operational": 1, 00:13:36.098 "base_bdevs_list": [ 00:13:36.098 { 00:13:36.098 "name": null, 00:13:36.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.098 "is_configured": false, 00:13:36.098 "data_offset": 0, 00:13:36.098 "data_size": 63488 00:13:36.098 }, 00:13:36.098 { 
00:13:36.098 "name": "BaseBdev2", 00:13:36.098 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:36.098 "is_configured": true, 00:13:36.098 "data_offset": 2048, 00:13:36.098 "data_size": 63488 00:13:36.098 } 00:13:36.098 ] 00:13:36.098 }' 00:13:36.098 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.098 17:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.356 [2024-11-26 17:58:17.964592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:36.356 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:36.356 Zero copy mechanism will not be used. 00:13:36.356 Running I/O for 60 seconds... 00:13:36.615 17:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:36.615 17:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.615 17:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.615 [2024-11-26 17:58:18.344089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:36.615 17:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.615 17:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:36.615 [2024-11-26 17:58:18.411712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:36.615 [2024-11-26 17:58:18.413988] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:36.875 [2024-11-26 17:58:18.524384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:36.875 [2024-11-26 17:58:18.525041] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 
00:13:36.875 [2024-11-26 17:58:18.658646] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:36.875 [2024-11-26 17:58:18.659026] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:37.133 [2024-11-26 17:58:18.923671] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:37.390 139.00 IOPS, 417.00 MiB/s [2024-11-26T17:58:19.253Z] [2024-11-26 17:58:19.141652] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:37.647 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.647 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.647 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.647 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.647 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.647 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.647 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.647 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.647 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.647 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.647 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.647 "name": "raid_bdev1", 00:13:37.647 "uuid": 
"f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:37.647 "strip_size_kb": 0, 00:13:37.647 "state": "online", 00:13:37.647 "raid_level": "raid1", 00:13:37.647 "superblock": true, 00:13:37.647 "num_base_bdevs": 2, 00:13:37.647 "num_base_bdevs_discovered": 2, 00:13:37.647 "num_base_bdevs_operational": 2, 00:13:37.647 "process": { 00:13:37.647 "type": "rebuild", 00:13:37.647 "target": "spare", 00:13:37.647 "progress": { 00:13:37.647 "blocks": 12288, 00:13:37.647 "percent": 19 00:13:37.647 } 00:13:37.647 }, 00:13:37.647 "base_bdevs_list": [ 00:13:37.647 { 00:13:37.647 "name": "spare", 00:13:37.647 "uuid": "87b54ed6-a62a-545c-bef5-59b6e05e9488", 00:13:37.647 "is_configured": true, 00:13:37.647 "data_offset": 2048, 00:13:37.647 "data_size": 63488 00:13:37.647 }, 00:13:37.647 { 00:13:37.647 "name": "BaseBdev2", 00:13:37.647 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:37.647 "is_configured": true, 00:13:37.647 "data_offset": 2048, 00:13:37.647 "data_size": 63488 00:13:37.647 } 00:13:37.647 ] 00:13:37.647 }' 00:13:37.647 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.647 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.647 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.905 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.905 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:37.905 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.905 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.905 [2024-11-26 17:58:19.532090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.905 [2024-11-26 17:58:19.592638] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:37.905 [2024-11-26 17:58:19.707954] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:37.906 [2024-11-26 17:58:19.717687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.906 [2024-11-26 17:58:19.717756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.906 [2024-11-26 17:58:19.717772] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:37.906 [2024-11-26 17:58:19.761538] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.165 "name": "raid_bdev1", 00:13:38.165 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:38.165 "strip_size_kb": 0, 00:13:38.165 "state": "online", 00:13:38.165 "raid_level": "raid1", 00:13:38.165 "superblock": true, 00:13:38.165 "num_base_bdevs": 2, 00:13:38.165 "num_base_bdevs_discovered": 1, 00:13:38.165 "num_base_bdevs_operational": 1, 00:13:38.165 "base_bdevs_list": [ 00:13:38.165 { 00:13:38.165 "name": null, 00:13:38.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.165 "is_configured": false, 00:13:38.165 "data_offset": 0, 00:13:38.165 "data_size": 63488 00:13:38.165 }, 00:13:38.165 { 00:13:38.165 "name": "BaseBdev2", 00:13:38.165 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:38.165 "is_configured": true, 00:13:38.165 "data_offset": 2048, 00:13:38.165 "data_size": 63488 00:13:38.165 } 00:13:38.165 ] 00:13:38.165 }' 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.165 17:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.433 127.50 IOPS, 382.50 MiB/s [2024-11-26T17:58:20.296Z] 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:38.433 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.433 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:38.433 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:38.433 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.433 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.433 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.433 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.433 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.433 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.433 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.433 "name": "raid_bdev1", 00:13:38.433 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:38.433 "strip_size_kb": 0, 00:13:38.433 "state": "online", 00:13:38.433 "raid_level": "raid1", 00:13:38.433 "superblock": true, 00:13:38.433 "num_base_bdevs": 2, 00:13:38.433 "num_base_bdevs_discovered": 1, 00:13:38.433 "num_base_bdevs_operational": 1, 00:13:38.433 "base_bdevs_list": [ 00:13:38.433 { 00:13:38.433 "name": null, 00:13:38.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.433 "is_configured": false, 00:13:38.433 "data_offset": 0, 00:13:38.433 "data_size": 63488 00:13:38.433 }, 00:13:38.433 { 00:13:38.433 "name": "BaseBdev2", 00:13:38.433 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:38.433 "is_configured": true, 00:13:38.433 "data_offset": 2048, 00:13:38.433 "data_size": 63488 00:13:38.433 } 00:13:38.433 ] 00:13:38.433 }' 00:13:38.433 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:13:38.692 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:38.692 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.692 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:38.692 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:38.692 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.692 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.692 [2024-11-26 17:58:20.381978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.692 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.692 17:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:38.692 [2024-11-26 17:58:20.461873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:38.692 [2024-11-26 17:58:20.464130] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:38.951 [2024-11-26 17:58:20.588158] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:38.951 [2024-11-26 17:58:20.588812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:38.951 [2024-11-26 17:58:20.799918] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:38.951 [2024-11-26 17:58:20.800322] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:39.472 136.00 IOPS, 408.00 MiB/s [2024-11-26T17:58:21.335Z] 
[2024-11-26 17:58:21.129964] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:39.472 [2024-11-26 17:58:21.130630] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:39.472 [2024-11-26 17:58:21.255267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:39.472 [2024-11-26 17:58:21.255650] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.737 "name": "raid_bdev1", 00:13:39.737 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:39.737 
"strip_size_kb": 0, 00:13:39.737 "state": "online", 00:13:39.737 "raid_level": "raid1", 00:13:39.737 "superblock": true, 00:13:39.737 "num_base_bdevs": 2, 00:13:39.737 "num_base_bdevs_discovered": 2, 00:13:39.737 "num_base_bdevs_operational": 2, 00:13:39.737 "process": { 00:13:39.737 "type": "rebuild", 00:13:39.737 "target": "spare", 00:13:39.737 "progress": { 00:13:39.737 "blocks": 10240, 00:13:39.737 "percent": 16 00:13:39.737 } 00:13:39.737 }, 00:13:39.737 "base_bdevs_list": [ 00:13:39.737 { 00:13:39.737 "name": "spare", 00:13:39.737 "uuid": "87b54ed6-a62a-545c-bef5-59b6e05e9488", 00:13:39.737 "is_configured": true, 00:13:39.737 "data_offset": 2048, 00:13:39.737 "data_size": 63488 00:13:39.737 }, 00:13:39.737 { 00:13:39.737 "name": "BaseBdev2", 00:13:39.737 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:39.737 "is_configured": true, 00:13:39.737 "data_offset": 2048, 00:13:39.737 "data_size": 63488 00:13:39.737 } 00:13:39.737 ] 00:13:39.737 }' 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:39.737 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:39.737 17:58:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=442 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.737 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.998 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.998 [2024-11-26 17:58:21.619222] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:39.998 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.998 "name": "raid_bdev1", 00:13:39.998 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:39.998 "strip_size_kb": 0, 00:13:39.998 "state": "online", 00:13:39.998 "raid_level": "raid1", 00:13:39.998 "superblock": true, 00:13:39.998 "num_base_bdevs": 2, 00:13:39.998 
"num_base_bdevs_discovered": 2, 00:13:39.998 "num_base_bdevs_operational": 2, 00:13:39.998 "process": { 00:13:39.998 "type": "rebuild", 00:13:39.998 "target": "spare", 00:13:39.998 "progress": { 00:13:39.998 "blocks": 12288, 00:13:39.998 "percent": 19 00:13:39.998 } 00:13:39.998 }, 00:13:39.998 "base_bdevs_list": [ 00:13:39.998 { 00:13:39.998 "name": "spare", 00:13:39.998 "uuid": "87b54ed6-a62a-545c-bef5-59b6e05e9488", 00:13:39.998 "is_configured": true, 00:13:39.998 "data_offset": 2048, 00:13:39.998 "data_size": 63488 00:13:39.998 }, 00:13:39.998 { 00:13:39.998 "name": "BaseBdev2", 00:13:39.998 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:39.998 "is_configured": true, 00:13:39.998 "data_offset": 2048, 00:13:39.998 "data_size": 63488 00:13:39.998 } 00:13:39.998 ] 00:13:39.998 }' 00:13:39.998 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.998 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.998 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.998 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.998 17:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:39.998 [2024-11-26 17:58:21.744821] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:40.519 118.50 IOPS, 355.50 MiB/s [2024-11-26T17:58:22.382Z] [2024-11-26 17:58:22.342165] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:40.519 [2024-11-26 17:58:22.342810] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:40.779 [2024-11-26 17:58:22.546388] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:41.039 17:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.039 17:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.039 17:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.039 17:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.039 17:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.039 17:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.039 17:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.039 17:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.039 17:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.039 17:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.039 17:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.039 17:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.039 "name": "raid_bdev1", 00:13:41.039 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:41.039 "strip_size_kb": 0, 00:13:41.039 "state": "online", 00:13:41.039 "raid_level": "raid1", 00:13:41.039 "superblock": true, 00:13:41.039 "num_base_bdevs": 2, 00:13:41.039 "num_base_bdevs_discovered": 2, 00:13:41.039 "num_base_bdevs_operational": 2, 00:13:41.039 "process": { 00:13:41.039 "type": "rebuild", 00:13:41.039 "target": "spare", 00:13:41.039 "progress": { 00:13:41.039 "blocks": 28672, 00:13:41.039 "percent": 45 00:13:41.039 } 00:13:41.039 }, 
00:13:41.039 "base_bdevs_list": [ 00:13:41.039 { 00:13:41.039 "name": "spare", 00:13:41.039 "uuid": "87b54ed6-a62a-545c-bef5-59b6e05e9488", 00:13:41.039 "is_configured": true, 00:13:41.039 "data_offset": 2048, 00:13:41.039 "data_size": 63488 00:13:41.039 }, 00:13:41.039 { 00:13:41.039 "name": "BaseBdev2", 00:13:41.039 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:41.039 "is_configured": true, 00:13:41.039 "data_offset": 2048, 00:13:41.039 "data_size": 63488 00:13:41.039 } 00:13:41.039 ] 00:13:41.039 }' 00:13:41.039 17:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.039 17:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.039 17:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.039 17:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.039 17:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:41.557 105.60 IOPS, 316.80 MiB/s [2024-11-26T17:58:23.421Z] [2024-11-26 17:58:23.187449] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:42.154 17:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:42.154 17:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.154 17:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.154 17:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.154 17:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.154 17:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.154 
17:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.154 17:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.154 17:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.154 17:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.154 17:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.154 17:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.154 "name": "raid_bdev1", 00:13:42.154 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:42.154 "strip_size_kb": 0, 00:13:42.154 "state": "online", 00:13:42.154 "raid_level": "raid1", 00:13:42.154 "superblock": true, 00:13:42.154 "num_base_bdevs": 2, 00:13:42.154 "num_base_bdevs_discovered": 2, 00:13:42.154 "num_base_bdevs_operational": 2, 00:13:42.154 "process": { 00:13:42.154 "type": "rebuild", 00:13:42.154 "target": "spare", 00:13:42.154 "progress": { 00:13:42.154 "blocks": 49152, 00:13:42.154 "percent": 77 00:13:42.154 } 00:13:42.154 }, 00:13:42.154 "base_bdevs_list": [ 00:13:42.154 { 00:13:42.154 "name": "spare", 00:13:42.154 "uuid": "87b54ed6-a62a-545c-bef5-59b6e05e9488", 00:13:42.154 "is_configured": true, 00:13:42.154 "data_offset": 2048, 00:13:42.154 "data_size": 63488 00:13:42.154 }, 00:13:42.154 { 00:13:42.154 "name": "BaseBdev2", 00:13:42.154 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:42.154 "is_configured": true, 00:13:42.154 "data_offset": 2048, 00:13:42.154 "data_size": 63488 00:13:42.154 } 00:13:42.154 ] 00:13:42.154 }' 00:13:42.154 17:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.155 17:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:42.155 17:58:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.155 94.17 IOPS, 282.50 MiB/s [2024-11-26T17:58:24.018Z] 17:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.155 17:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:43.101 [2024-11-26 17:58:24.616568] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:43.101 [2024-11-26 17:58:24.722875] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:43.101 [2024-11-26 17:58:24.726243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.361 84.71 IOPS, 254.14 MiB/s [2024-11-26T17:58:25.224Z] 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:43.361 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.361 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.361 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.361 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.361 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.361 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.361 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.361 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.361 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.361 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.361 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.361 "name": "raid_bdev1", 00:13:43.361 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:43.361 "strip_size_kb": 0, 00:13:43.361 "state": "online", 00:13:43.361 "raid_level": "raid1", 00:13:43.361 "superblock": true, 00:13:43.361 "num_base_bdevs": 2, 00:13:43.361 "num_base_bdevs_discovered": 2, 00:13:43.361 "num_base_bdevs_operational": 2, 00:13:43.361 "base_bdevs_list": [ 00:13:43.361 { 00:13:43.361 "name": "spare", 00:13:43.361 "uuid": "87b54ed6-a62a-545c-bef5-59b6e05e9488", 00:13:43.361 "is_configured": true, 00:13:43.361 "data_offset": 2048, 00:13:43.361 "data_size": 63488 00:13:43.361 }, 00:13:43.361 { 00:13:43.361 "name": "BaseBdev2", 00:13:43.361 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:43.361 "is_configured": true, 00:13:43.361 "data_offset": 2048, 00:13:43.361 "data_size": 63488 00:13:43.361 } 00:13:43.361 ] 00:13:43.361 }' 00:13:43.361 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.361 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:43.361 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.362 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:43.362 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:43.362 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.362 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.362 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.362 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.362 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.362 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.362 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.362 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.362 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.362 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.362 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.362 "name": "raid_bdev1", 00:13:43.362 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:43.362 "strip_size_kb": 0, 00:13:43.362 "state": "online", 00:13:43.362 "raid_level": "raid1", 00:13:43.362 "superblock": true, 00:13:43.362 "num_base_bdevs": 2, 00:13:43.362 "num_base_bdevs_discovered": 2, 00:13:43.362 "num_base_bdevs_operational": 2, 00:13:43.362 "base_bdevs_list": [ 00:13:43.362 { 00:13:43.362 "name": "spare", 00:13:43.362 "uuid": "87b54ed6-a62a-545c-bef5-59b6e05e9488", 00:13:43.362 "is_configured": true, 00:13:43.362 "data_offset": 2048, 00:13:43.362 "data_size": 63488 00:13:43.362 }, 00:13:43.362 { 00:13:43.362 "name": "BaseBdev2", 00:13:43.362 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:43.362 "is_configured": true, 00:13:43.362 "data_offset": 2048, 00:13:43.362 "data_size": 63488 00:13:43.362 } 00:13:43.362 ] 00:13:43.362 }' 00:13:43.362 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.621 "name": "raid_bdev1", 00:13:43.621 
"uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:43.621 "strip_size_kb": 0, 00:13:43.621 "state": "online", 00:13:43.621 "raid_level": "raid1", 00:13:43.621 "superblock": true, 00:13:43.621 "num_base_bdevs": 2, 00:13:43.621 "num_base_bdevs_discovered": 2, 00:13:43.621 "num_base_bdevs_operational": 2, 00:13:43.621 "base_bdevs_list": [ 00:13:43.621 { 00:13:43.621 "name": "spare", 00:13:43.621 "uuid": "87b54ed6-a62a-545c-bef5-59b6e05e9488", 00:13:43.621 "is_configured": true, 00:13:43.621 "data_offset": 2048, 00:13:43.621 "data_size": 63488 00:13:43.621 }, 00:13:43.621 { 00:13:43.621 "name": "BaseBdev2", 00:13:43.621 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:43.621 "is_configured": true, 00:13:43.621 "data_offset": 2048, 00:13:43.621 "data_size": 63488 00:13:43.621 } 00:13:43.621 ] 00:13:43.621 }' 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.621 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.189 [2024-11-26 17:58:25.758554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:44.189 [2024-11-26 17:58:25.758594] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:44.189 00:13:44.189 Latency(us) 00:13:44.189 [2024-11-26T17:58:26.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.189 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:44.189 raid_bdev1 : 7.85 79.45 238.34 0.00 0.00 17344.55 350.57 115389.15 00:13:44.189 [2024-11-26T17:58:26.052Z] 
=================================================================================================================== 00:13:44.189 [2024-11-26T17:58:26.052Z] Total : 79.45 238.34 0.00 0.00 17344.55 350.57 115389.15 00:13:44.189 { 00:13:44.189 "results": [ 00:13:44.189 { 00:13:44.189 "job": "raid_bdev1", 00:13:44.189 "core_mask": "0x1", 00:13:44.189 "workload": "randrw", 00:13:44.189 "percentage": 50, 00:13:44.189 "status": "finished", 00:13:44.189 "queue_depth": 2, 00:13:44.189 "io_size": 3145728, 00:13:44.189 "runtime": 7.854487, 00:13:44.189 "iops": 79.44503568469844, 00:13:44.189 "mibps": 238.33510705409532, 00:13:44.189 "io_failed": 0, 00:13:44.189 "io_timeout": 0, 00:13:44.189 "avg_latency_us": 17344.550352704064, 00:13:44.189 "min_latency_us": 350.57467248908296, 00:13:44.189 "max_latency_us": 115389.14934497817 00:13:44.189 } 00:13:44.189 ], 00:13:44.189 "core_count": 1 00:13:44.189 } 00:13:44.189 [2024-11-26 17:58:25.831391] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.189 [2024-11-26 17:58:25.831482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.189 [2024-11-26 17:58:25.831573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.189 [2024-11-26 17:58:25.831590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.189 17:58:25 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.189 17:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:44.449 /dev/nbd0 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 
00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.449 1+0 records in 00:13:44.449 1+0 records out 00:13:44.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360589 s, 11.4 MB/s 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' 
-z BaseBdev2 ']' 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.449 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:44.708 /dev/nbd1 00:13:44.708 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:44.708 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:44.708 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:44.708 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:44.709 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:44.709 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:44.709 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:44.709 17:58:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:44.709 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:44.709 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:44.709 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.709 1+0 records in 00:13:44.709 1+0 records out 00:13:44.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462011 s, 8.9 MB/s 00:13:44.709 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.709 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:44.709 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.709 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:44.709 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:44.709 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.709 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.709 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:44.967 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:44.968 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.968 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:44.968 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:13:44.968 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:44.968 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.968 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:45.226 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:45.226 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:45.226 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:45.226 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.226 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.226 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:45.226 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:45.226 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.226 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:45.226 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:45.226 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:45.226 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:45.226 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:45.226 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.226 17:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.486 [2024-11-26 17:58:27.193598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:45.486 [2024-11-26 17:58:27.193674] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.486 [2024-11-26 17:58:27.193703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:45.486 [2024-11-26 17:58:27.193716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.486 [2024-11-26 17:58:27.196356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.486 [2024-11-26 17:58:27.196401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:45.486 [2024-11-26 17:58:27.196511] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:45.486 [2024-11-26 17:58:27.196580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:45.486 [2024-11-26 17:58:27.196743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.486 spare 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.486 [2024-11-26 17:58:27.296689] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:45.486 [2024-11-26 17:58:27.296747] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:45.486 [2024-11-26 17:58:27.297164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:45.486 [2024-11-26 17:58:27.297420] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:45.486 [2024-11-26 17:58:27.297448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000007b00 00:13:45.486 [2024-11-26 17:58:27.297677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.486 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.746 17:58:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.746 "name": "raid_bdev1", 00:13:45.746 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:45.746 "strip_size_kb": 0, 00:13:45.746 "state": "online", 00:13:45.746 "raid_level": "raid1", 00:13:45.746 "superblock": true, 00:13:45.746 "num_base_bdevs": 2, 00:13:45.746 "num_base_bdevs_discovered": 2, 00:13:45.746 "num_base_bdevs_operational": 2, 00:13:45.746 "base_bdevs_list": [ 00:13:45.746 { 00:13:45.746 "name": "spare", 00:13:45.746 "uuid": "87b54ed6-a62a-545c-bef5-59b6e05e9488", 00:13:45.746 "is_configured": true, 00:13:45.746 "data_offset": 2048, 00:13:45.746 "data_size": 63488 00:13:45.746 }, 00:13:45.746 { 00:13:45.746 "name": "BaseBdev2", 00:13:45.746 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:45.746 "is_configured": true, 00:13:45.746 "data_offset": 2048, 00:13:45.746 "data_size": 63488 00:13:45.746 } 00:13:45.746 ] 00:13:45.746 }' 00:13:45.746 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.746 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.006 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:46.006 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.006 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:46.006 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:46.006 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.006 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.006 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.006 17:58:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.006 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.006 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.006 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.006 "name": "raid_bdev1", 00:13:46.006 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:46.006 "strip_size_kb": 0, 00:13:46.006 "state": "online", 00:13:46.007 "raid_level": "raid1", 00:13:46.007 "superblock": true, 00:13:46.007 "num_base_bdevs": 2, 00:13:46.007 "num_base_bdevs_discovered": 2, 00:13:46.007 "num_base_bdevs_operational": 2, 00:13:46.007 "base_bdevs_list": [ 00:13:46.007 { 00:13:46.007 "name": "spare", 00:13:46.007 "uuid": "87b54ed6-a62a-545c-bef5-59b6e05e9488", 00:13:46.007 "is_configured": true, 00:13:46.007 "data_offset": 2048, 00:13:46.007 "data_size": 63488 00:13:46.007 }, 00:13:46.007 { 00:13:46.007 "name": "BaseBdev2", 00:13:46.007 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:46.007 "is_configured": true, 00:13:46.007 "data_offset": 2048, 00:13:46.007 "data_size": 63488 00:13:46.007 } 00:13:46.007 ] 00:13:46.007 }' 00:13:46.007 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.267 17:58:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.267 [2024-11-26 17:58:27.984746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.267 17:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.267 17:58:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.267 17:58:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.267 "name": "raid_bdev1", 00:13:46.268 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:46.268 "strip_size_kb": 0, 00:13:46.268 "state": "online", 00:13:46.268 "raid_level": "raid1", 00:13:46.268 "superblock": true, 00:13:46.268 "num_base_bdevs": 2, 00:13:46.268 "num_base_bdevs_discovered": 1, 00:13:46.268 "num_base_bdevs_operational": 1, 00:13:46.268 "base_bdevs_list": [ 00:13:46.268 { 00:13:46.268 "name": null, 00:13:46.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.268 "is_configured": false, 00:13:46.268 "data_offset": 0, 00:13:46.268 "data_size": 63488 00:13:46.268 }, 00:13:46.268 { 00:13:46.268 "name": "BaseBdev2", 00:13:46.268 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:46.268 "is_configured": true, 00:13:46.268 "data_offset": 2048, 00:13:46.268 "data_size": 63488 00:13:46.268 } 00:13:46.268 ] 00:13:46.268 }' 00:13:46.268 17:58:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.268 17:58:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.838 17:58:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:13:46.838 17:58:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.838 17:58:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.838 [2024-11-26 17:58:28.495958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.838 [2024-11-26 17:58:28.496197] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:46.838 [2024-11-26 17:58:28.496219] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:46.838 [2024-11-26 17:58:28.496268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.838 [2024-11-26 17:58:28.513976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:46.838 17:58:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.838 17:58:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:46.838 [2024-11-26 17:58:28.515932] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:47.775 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.775 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.775 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.775 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.775 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.775 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.775 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.775 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.775 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.775 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.775 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.775 "name": "raid_bdev1", 00:13:47.775 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:47.775 "strip_size_kb": 0, 00:13:47.775 "state": "online", 00:13:47.776 "raid_level": "raid1", 00:13:47.776 "superblock": true, 00:13:47.776 "num_base_bdevs": 2, 00:13:47.776 "num_base_bdevs_discovered": 2, 00:13:47.776 "num_base_bdevs_operational": 2, 00:13:47.776 "process": { 00:13:47.776 "type": "rebuild", 00:13:47.776 "target": "spare", 00:13:47.776 "progress": { 00:13:47.776 "blocks": 20480, 00:13:47.776 "percent": 32 00:13:47.776 } 00:13:47.776 }, 00:13:47.776 "base_bdevs_list": [ 00:13:47.776 { 00:13:47.776 "name": "spare", 00:13:47.776 "uuid": "87b54ed6-a62a-545c-bef5-59b6e05e9488", 00:13:47.776 "is_configured": true, 00:13:47.776 "data_offset": 2048, 00:13:47.776 "data_size": 63488 00:13:47.776 }, 00:13:47.776 { 00:13:47.776 "name": "BaseBdev2", 00:13:47.776 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:47.776 "is_configured": true, 00:13:47.776 "data_offset": 2048, 00:13:47.776 "data_size": 63488 00:13:47.776 } 00:13:47.776 ] 00:13:47.776 }' 00:13:47.776 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.776 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.776 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.035 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.036 [2024-11-26 17:58:29.659950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:48.036 [2024-11-26 17:58:29.722289] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:48.036 [2024-11-26 17:58:29.722393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.036 [2024-11-26 17:58:29.722413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:48.036 [2024-11-26 17:58:29.722421] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.036 "name": "raid_bdev1", 00:13:48.036 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:48.036 "strip_size_kb": 0, 00:13:48.036 "state": "online", 00:13:48.036 "raid_level": "raid1", 00:13:48.036 "superblock": true, 00:13:48.036 "num_base_bdevs": 2, 00:13:48.036 "num_base_bdevs_discovered": 1, 00:13:48.036 "num_base_bdevs_operational": 1, 00:13:48.036 "base_bdevs_list": [ 00:13:48.036 { 00:13:48.036 "name": null, 00:13:48.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.036 "is_configured": false, 00:13:48.036 "data_offset": 0, 00:13:48.036 "data_size": 63488 00:13:48.036 }, 00:13:48.036 { 00:13:48.036 "name": "BaseBdev2", 00:13:48.036 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:48.036 "is_configured": true, 00:13:48.036 "data_offset": 2048, 00:13:48.036 "data_size": 63488 00:13:48.036 } 00:13:48.036 ] 00:13:48.036 }' 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.036 17:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:48.604 17:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:48.604 17:58:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.604 17:58:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.604 [2024-11-26 17:58:30.246391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:48.604 [2024-11-26 17:58:30.246474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.604 [2024-11-26 17:58:30.246504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:48.604 [2024-11-26 17:58:30.246516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.604 [2024-11-26 17:58:30.247104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.604 [2024-11-26 17:58:30.247134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:48.604 [2024-11-26 17:58:30.247249] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:48.604 [2024-11-26 17:58:30.247271] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:48.604 [2024-11-26 17:58:30.247284] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:48.604 [2024-11-26 17:58:30.247307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:48.604 [2024-11-26 17:58:30.265702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:48.604 spare 00:13:48.604 17:58:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.604 17:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:48.604 [2024-11-26 17:58:30.267923] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:49.543 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.543 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.543 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.543 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.543 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.543 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.543 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.543 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.543 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.543 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.543 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.543 "name": "raid_bdev1", 00:13:49.543 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:49.543 "strip_size_kb": 0, 00:13:49.543 
"state": "online", 00:13:49.543 "raid_level": "raid1", 00:13:49.543 "superblock": true, 00:13:49.543 "num_base_bdevs": 2, 00:13:49.543 "num_base_bdevs_discovered": 2, 00:13:49.543 "num_base_bdevs_operational": 2, 00:13:49.543 "process": { 00:13:49.543 "type": "rebuild", 00:13:49.543 "target": "spare", 00:13:49.543 "progress": { 00:13:49.543 "blocks": 20480, 00:13:49.543 "percent": 32 00:13:49.543 } 00:13:49.543 }, 00:13:49.543 "base_bdevs_list": [ 00:13:49.543 { 00:13:49.543 "name": "spare", 00:13:49.543 "uuid": "87b54ed6-a62a-545c-bef5-59b6e05e9488", 00:13:49.543 "is_configured": true, 00:13:49.543 "data_offset": 2048, 00:13:49.543 "data_size": 63488 00:13:49.543 }, 00:13:49.543 { 00:13:49.543 "name": "BaseBdev2", 00:13:49.543 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:49.543 "is_configured": true, 00:13:49.543 "data_offset": 2048, 00:13:49.543 "data_size": 63488 00:13:49.543 } 00:13:49.543 ] 00:13:49.543 }' 00:13:49.543 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.543 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.543 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.802 [2024-11-26 17:58:31.419838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.802 [2024-11-26 17:58:31.474306] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:49.802 [2024-11-26 17:58:31.474401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.802 [2024-11-26 17:58:31.474419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.802 [2024-11-26 17:58:31.474430] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.802 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.802 "name": "raid_bdev1", 00:13:49.802 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:49.802 "strip_size_kb": 0, 00:13:49.802 "state": "online", 00:13:49.802 "raid_level": "raid1", 00:13:49.803 "superblock": true, 00:13:49.803 "num_base_bdevs": 2, 00:13:49.803 "num_base_bdevs_discovered": 1, 00:13:49.803 "num_base_bdevs_operational": 1, 00:13:49.803 "base_bdevs_list": [ 00:13:49.803 { 00:13:49.803 "name": null, 00:13:49.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.803 "is_configured": false, 00:13:49.803 "data_offset": 0, 00:13:49.803 "data_size": 63488 00:13:49.803 }, 00:13:49.803 { 00:13:49.803 "name": "BaseBdev2", 00:13:49.803 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:49.803 "is_configured": true, 00:13:49.803 "data_offset": 2048, 00:13:49.803 "data_size": 63488 00:13:49.803 } 00:13:49.803 ] 00:13:49.803 }' 00:13:49.803 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.803 17:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.370 "name": "raid_bdev1", 00:13:50.370 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:50.370 "strip_size_kb": 0, 00:13:50.370 "state": "online", 00:13:50.370 "raid_level": "raid1", 00:13:50.370 "superblock": true, 00:13:50.370 "num_base_bdevs": 2, 00:13:50.370 "num_base_bdevs_discovered": 1, 00:13:50.370 "num_base_bdevs_operational": 1, 00:13:50.370 "base_bdevs_list": [ 00:13:50.370 { 00:13:50.370 "name": null, 00:13:50.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.370 "is_configured": false, 00:13:50.370 "data_offset": 0, 00:13:50.370 "data_size": 63488 00:13:50.370 }, 00:13:50.370 { 00:13:50.370 "name": "BaseBdev2", 00:13:50.370 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:50.370 "is_configured": true, 00:13:50.370 "data_offset": 2048, 00:13:50.370 "data_size": 63488 00:13:50.370 } 00:13:50.370 ] 00:13:50.370 }' 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.370 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.370 [2024-11-26 17:58:32.176196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:50.370 [2024-11-26 17:58:32.176273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.370 [2024-11-26 17:58:32.176305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:50.370 [2024-11-26 17:58:32.176324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.370 [2024-11-26 17:58:32.176885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.370 [2024-11-26 17:58:32.176921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:50.370 [2024-11-26 17:58:32.177051] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:50.370 [2024-11-26 17:58:32.177075] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:50.370 [2024-11-26 17:58:32.177084] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:50.371 [2024-11-26 17:58:32.177102] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:50.371 BaseBdev1 00:13:50.371 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.371 17:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:51.749 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:51.749 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.749 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.749 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.749 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.749 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:51.749 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.749 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.749 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.749 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.749 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.749 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.749 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.749 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.749 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.749 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.749 "name": "raid_bdev1", 00:13:51.749 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:51.749 "strip_size_kb": 0, 00:13:51.749 "state": "online", 00:13:51.749 "raid_level": "raid1", 00:13:51.749 "superblock": true, 00:13:51.749 "num_base_bdevs": 2, 00:13:51.749 "num_base_bdevs_discovered": 1, 00:13:51.749 "num_base_bdevs_operational": 1, 00:13:51.749 "base_bdevs_list": [ 00:13:51.749 { 00:13:51.749 "name": null, 00:13:51.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.749 "is_configured": false, 00:13:51.749 "data_offset": 0, 00:13:51.749 "data_size": 63488 00:13:51.749 }, 00:13:51.749 { 00:13:51.749 "name": "BaseBdev2", 00:13:51.749 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:51.749 "is_configured": true, 00:13:51.749 "data_offset": 2048, 00:13:51.749 "data_size": 63488 00:13:51.749 } 00:13:51.749 ] 00:13:51.749 }' 00:13:51.749 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.749 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.008 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:52.008 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.008 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:52.008 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:52.008 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.008 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.008 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.008 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.008 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.008 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.008 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.008 "name": "raid_bdev1", 00:13:52.008 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:52.008 "strip_size_kb": 0, 00:13:52.008 "state": "online", 00:13:52.008 "raid_level": "raid1", 00:13:52.008 "superblock": true, 00:13:52.008 "num_base_bdevs": 2, 00:13:52.008 "num_base_bdevs_discovered": 1, 00:13:52.008 "num_base_bdevs_operational": 1, 00:13:52.008 "base_bdevs_list": [ 00:13:52.008 { 00:13:52.008 "name": null, 00:13:52.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.008 "is_configured": false, 00:13:52.008 "data_offset": 0, 00:13:52.008 "data_size": 63488 00:13:52.008 }, 00:13:52.008 { 00:13:52.008 "name": "BaseBdev2", 00:13:52.008 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:52.008 "is_configured": true, 00:13:52.008 "data_offset": 2048, 00:13:52.008 "data_size": 63488 00:13:52.008 } 00:13:52.008 ] 00:13:52.008 }' 00:13:52.008 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.008 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:52.008 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.008 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:52.009 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:52.009 17:58:33 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@652 -- # local es=0 00:13:52.009 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:52.009 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:52.009 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.009 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:52.009 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.009 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:52.009 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.009 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.009 [2024-11-26 17:58:33.825665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.009 [2024-11-26 17:58:33.825859] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:52.009 [2024-11-26 17:58:33.825881] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:52.009 request: 00:13:52.009 { 00:13:52.009 "base_bdev": "BaseBdev1", 00:13:52.009 "raid_bdev": "raid_bdev1", 00:13:52.009 "method": "bdev_raid_add_base_bdev", 00:13:52.009 "req_id": 1 00:13:52.009 } 00:13:52.009 Got JSON-RPC error response 00:13:52.009 response: 00:13:52.009 { 00:13:52.009 "code": -22, 00:13:52.009 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:52.009 } 00:13:52.009 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:13:52.009 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:52.009 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:52.009 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:52.009 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:52.009 17:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:53.387 17:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:53.387 17:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.387 17:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.387 17:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.387 17:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.387 17:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:53.387 17:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.387 17:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.387 17:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.387 17:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.387 17:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.387 17:58:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.387 17:58:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.387 17:58:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.387 17:58:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.387 17:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.387 "name": "raid_bdev1", 00:13:53.387 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:53.387 "strip_size_kb": 0, 00:13:53.387 "state": "online", 00:13:53.387 "raid_level": "raid1", 00:13:53.387 "superblock": true, 00:13:53.387 "num_base_bdevs": 2, 00:13:53.387 "num_base_bdevs_discovered": 1, 00:13:53.387 "num_base_bdevs_operational": 1, 00:13:53.387 "base_bdevs_list": [ 00:13:53.387 { 00:13:53.387 "name": null, 00:13:53.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.387 "is_configured": false, 00:13:53.388 "data_offset": 0, 00:13:53.388 "data_size": 63488 00:13:53.388 }, 00:13:53.388 { 00:13:53.388 "name": "BaseBdev2", 00:13:53.388 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:53.388 "is_configured": true, 00:13:53.388 "data_offset": 2048, 00:13:53.388 "data_size": 63488 00:13:53.388 } 00:13:53.388 ] 00:13:53.388 }' 00:13:53.388 17:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.388 17:58:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.647 17:58:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.647 "name": "raid_bdev1", 00:13:53.647 "uuid": "f0dc530d-fddb-4b18-a7b6-75d0d6fb5f55", 00:13:53.647 "strip_size_kb": 0, 00:13:53.647 "state": "online", 00:13:53.647 "raid_level": "raid1", 00:13:53.647 "superblock": true, 00:13:53.647 "num_base_bdevs": 2, 00:13:53.647 "num_base_bdevs_discovered": 1, 00:13:53.647 "num_base_bdevs_operational": 1, 00:13:53.647 "base_bdevs_list": [ 00:13:53.647 { 00:13:53.647 "name": null, 00:13:53.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.647 "is_configured": false, 00:13:53.647 "data_offset": 0, 00:13:53.647 "data_size": 63488 00:13:53.647 }, 00:13:53.647 { 00:13:53.647 "name": "BaseBdev2", 00:13:53.647 "uuid": "867aff2a-e18f-5946-a867-979858eb3ecc", 00:13:53.647 "is_configured": true, 00:13:53.647 "data_offset": 2048, 00:13:53.647 "data_size": 63488 00:13:53.647 } 00:13:53.647 ] 00:13:53.647 }' 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:53.647 17:58:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77193 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77193 ']' 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77193 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77193 00:13:53.647 killing process with pid 77193 00:13:53.647 Received shutdown signal, test time was about 17.556438 seconds 00:13:53.647 00:13:53.647 Latency(us) 00:13:53.647 [2024-11-26T17:58:35.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.647 [2024-11-26T17:58:35.510Z] =================================================================================================================== 00:13:53.647 [2024-11-26T17:58:35.510Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77193' 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77193 00:13:53.647 [2024-11-26 17:58:35.489668] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.647 17:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77193 00:13:53.647 [2024-11-26 17:58:35.489815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.647 [2024-11-26 17:58:35.489880] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.647 [2024-11-26 17:58:35.489891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:53.905 [2024-11-26 17:58:35.741452] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:55.324 17:58:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:55.324 00:13:55.324 real 0m21.068s 00:13:55.324 user 0m27.755s 00:13:55.324 sys 0m2.249s 00:13:55.324 17:58:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:55.324 17:58:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.324 ************************************ 00:13:55.324 END TEST raid_rebuild_test_sb_io 00:13:55.324 ************************************ 00:13:55.584 17:58:37 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:55.584 17:58:37 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:55.584 17:58:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:55.584 17:58:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:55.584 17:58:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:55.584 ************************************ 00:13:55.584 START TEST raid_rebuild_test 00:13:55.584 ************************************ 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:55.584 17:58:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77895 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77895 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77895 ']' 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:55.584 17:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.584 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:55.584 Zero copy mechanism will not be used. 
00:13:55.584 [2024-11-26 17:58:37.342454] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:13:55.584 [2024-11-26 17:58:37.342596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77895 ] 00:13:55.843 [2024-11-26 17:58:37.525883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.843 [2024-11-26 17:58:37.661740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.103 [2024-11-26 17:58:37.887916] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.103 [2024-11-26 17:58:37.887953] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.673 BaseBdev1_malloc 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.673 
[2024-11-26 17:58:38.339243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:56.673 [2024-11-26 17:58:38.339304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.673 [2024-11-26 17:58:38.339327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:56.673 [2024-11-26 17:58:38.339338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.673 [2024-11-26 17:58:38.341450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.673 [2024-11-26 17:58:38.341494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:56.673 BaseBdev1 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.673 BaseBdev2_malloc 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.673 [2024-11-26 17:58:38.396958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:56.673 [2024-11-26 17:58:38.397045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:56.673 [2024-11-26 17:58:38.397073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:56.673 [2024-11-26 17:58:38.397086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.673 [2024-11-26 17:58:38.399560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.673 [2024-11-26 17:58:38.399603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:56.673 BaseBdev2 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.673 BaseBdev3_malloc 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:56.673 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.674 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.674 [2024-11-26 17:58:38.470977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:56.674 [2024-11-26 17:58:38.471053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.674 [2024-11-26 17:58:38.471079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:56.674 [2024-11-26 17:58:38.471093] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.674 [2024-11-26 17:58:38.473429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.674 [2024-11-26 17:58:38.473472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:56.674 BaseBdev3 00:13:56.674 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.674 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:56.674 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:56.674 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.674 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.674 BaseBdev4_malloc 00:13:56.674 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.674 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:56.674 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.674 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.674 [2024-11-26 17:58:38.530628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:56.674 [2024-11-26 17:58:38.530689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.674 [2024-11-26 17:58:38.530712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:56.674 [2024-11-26 17:58:38.530723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.674 [2024-11-26 17:58:38.532942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.674 [2024-11-26 17:58:38.532986] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:56.934 BaseBdev4 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.935 spare_malloc 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.935 spare_delay 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.935 [2024-11-26 17:58:38.603336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:56.935 [2024-11-26 17:58:38.603395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.935 [2024-11-26 17:58:38.603416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:56.935 [2024-11-26 17:58:38.603428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.935 [2024-11-26 
17:58:38.605737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.935 [2024-11-26 17:58:38.605780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:56.935 spare 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.935 [2024-11-26 17:58:38.615363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.935 [2024-11-26 17:58:38.617374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.935 [2024-11-26 17:58:38.617452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:56.935 [2024-11-26 17:58:38.617514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:56.935 [2024-11-26 17:58:38.617604] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:56.935 [2024-11-26 17:58:38.617621] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:56.935 [2024-11-26 17:58:38.617919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:56.935 [2024-11-26 17:58:38.618162] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:56.935 [2024-11-26 17:58:38.618189] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:56.935 [2024-11-26 17:58:38.618368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.935 "name": "raid_bdev1", 00:13:56.935 "uuid": "020e93ea-7837-4e46-8b99-76a0ca89910b", 00:13:56.935 "strip_size_kb": 0, 00:13:56.935 "state": "online", 00:13:56.935 "raid_level": 
"raid1", 00:13:56.935 "superblock": false, 00:13:56.935 "num_base_bdevs": 4, 00:13:56.935 "num_base_bdevs_discovered": 4, 00:13:56.935 "num_base_bdevs_operational": 4, 00:13:56.935 "base_bdevs_list": [ 00:13:56.935 { 00:13:56.935 "name": "BaseBdev1", 00:13:56.935 "uuid": "30cf7815-2eef-578c-a7ba-145d210a5954", 00:13:56.935 "is_configured": true, 00:13:56.935 "data_offset": 0, 00:13:56.935 "data_size": 65536 00:13:56.935 }, 00:13:56.935 { 00:13:56.935 "name": "BaseBdev2", 00:13:56.935 "uuid": "ad925d90-732f-5630-8d9d-a7ac37c60d83", 00:13:56.935 "is_configured": true, 00:13:56.935 "data_offset": 0, 00:13:56.935 "data_size": 65536 00:13:56.935 }, 00:13:56.935 { 00:13:56.935 "name": "BaseBdev3", 00:13:56.935 "uuid": "ef478c96-9526-5aec-82dc-0a26287ccb5c", 00:13:56.935 "is_configured": true, 00:13:56.935 "data_offset": 0, 00:13:56.935 "data_size": 65536 00:13:56.935 }, 00:13:56.935 { 00:13:56.935 "name": "BaseBdev4", 00:13:56.935 "uuid": "86d4d514-3933-5cc5-a505-89c3919c4bee", 00:13:56.935 "is_configured": true, 00:13:56.935 "data_offset": 0, 00:13:56.935 "data_size": 65536 00:13:56.935 } 00:13:56.935 ] 00:13:56.935 }' 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.935 17:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:57.503 [2024-11-26 17:58:39.087045] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.503 17:58:39 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.503 17:58:39 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:57.762 [2024-11-26 17:58:39.390183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:57.762 /dev/nbd0 00:13:57.762 17:58:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:57.762 17:58:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:57.762 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:57.762 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:57.762 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:57.762 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:57.763 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:57.763 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:57.763 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:57.763 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:57.763 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.763 1+0 records in 00:13:57.763 1+0 records out 00:13:57.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394719 s, 10.4 MB/s 00:13:57.763 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.763 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:57.763 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:57.763 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:57.763 17:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:57.763 17:58:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.763 17:58:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.763 17:58:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:57.763 17:58:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:57.763 17:58:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:04.333 65536+0 records in 00:14:04.333 65536+0 records out 00:14:04.333 33554432 bytes (34 MB, 32 MiB) copied, 6.11645 s, 5.5 MB/s 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:04.333 [2024-11-26 17:58:45.842805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:04.333 
17:58:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.333 [2024-11-26 17:58:45.874881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.333 17:58:45 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.333 "name": "raid_bdev1", 00:14:04.333 "uuid": "020e93ea-7837-4e46-8b99-76a0ca89910b", 00:14:04.333 "strip_size_kb": 0, 00:14:04.333 "state": "online", 00:14:04.333 "raid_level": "raid1", 00:14:04.333 "superblock": false, 00:14:04.333 "num_base_bdevs": 4, 00:14:04.333 "num_base_bdevs_discovered": 3, 00:14:04.333 "num_base_bdevs_operational": 3, 00:14:04.333 "base_bdevs_list": [ 00:14:04.333 { 00:14:04.333 "name": null, 00:14:04.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.333 "is_configured": false, 00:14:04.333 "data_offset": 0, 00:14:04.333 "data_size": 65536 00:14:04.333 }, 00:14:04.333 { 00:14:04.333 "name": "BaseBdev2", 00:14:04.333 "uuid": "ad925d90-732f-5630-8d9d-a7ac37c60d83", 00:14:04.333 "is_configured": true, 00:14:04.333 "data_offset": 0, 00:14:04.333 "data_size": 65536 00:14:04.333 }, 00:14:04.333 { 00:14:04.333 "name": "BaseBdev3", 00:14:04.333 "uuid": "ef478c96-9526-5aec-82dc-0a26287ccb5c", 00:14:04.333 "is_configured": true, 00:14:04.333 "data_offset": 0, 00:14:04.333 "data_size": 65536 00:14:04.333 }, 00:14:04.333 { 00:14:04.333 "name": "BaseBdev4", 00:14:04.333 "uuid": "86d4d514-3933-5cc5-a505-89c3919c4bee", 00:14:04.333 
"is_configured": true, 00:14:04.333 "data_offset": 0, 00:14:04.333 "data_size": 65536 00:14:04.333 } 00:14:04.333 ] 00:14:04.333 }' 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.333 17:58:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.592 17:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:04.592 17:58:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.592 17:58:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.592 [2024-11-26 17:58:46.362094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:04.592 [2024-11-26 17:58:46.381344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:04.592 17:58:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.592 17:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:04.592 [2024-11-26 17:58:46.383568] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:05.526 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.526 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.526 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.526 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.785 
17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.785 "name": "raid_bdev1", 00:14:05.785 "uuid": "020e93ea-7837-4e46-8b99-76a0ca89910b", 00:14:05.785 "strip_size_kb": 0, 00:14:05.785 "state": "online", 00:14:05.785 "raid_level": "raid1", 00:14:05.785 "superblock": false, 00:14:05.785 "num_base_bdevs": 4, 00:14:05.785 "num_base_bdevs_discovered": 4, 00:14:05.785 "num_base_bdevs_operational": 4, 00:14:05.785 "process": { 00:14:05.785 "type": "rebuild", 00:14:05.785 "target": "spare", 00:14:05.785 "progress": { 00:14:05.785 "blocks": 20480, 00:14:05.785 "percent": 31 00:14:05.785 } 00:14:05.785 }, 00:14:05.785 "base_bdevs_list": [ 00:14:05.785 { 00:14:05.785 "name": "spare", 00:14:05.785 "uuid": "52d0c425-0f11-53d4-8984-e99a42728f14", 00:14:05.785 "is_configured": true, 00:14:05.785 "data_offset": 0, 00:14:05.785 "data_size": 65536 00:14:05.785 }, 00:14:05.785 { 00:14:05.785 "name": "BaseBdev2", 00:14:05.785 "uuid": "ad925d90-732f-5630-8d9d-a7ac37c60d83", 00:14:05.785 "is_configured": true, 00:14:05.785 "data_offset": 0, 00:14:05.785 "data_size": 65536 00:14:05.785 }, 00:14:05.785 { 00:14:05.785 "name": "BaseBdev3", 00:14:05.785 "uuid": "ef478c96-9526-5aec-82dc-0a26287ccb5c", 00:14:05.785 "is_configured": true, 00:14:05.785 "data_offset": 0, 00:14:05.785 "data_size": 65536 00:14:05.785 }, 00:14:05.785 { 00:14:05.785 "name": "BaseBdev4", 00:14:05.785 "uuid": "86d4d514-3933-5cc5-a505-89c3919c4bee", 00:14:05.785 "is_configured": true, 00:14:05.785 "data_offset": 0, 00:14:05.785 "data_size": 65536 00:14:05.785 } 00:14:05.785 ] 00:14:05.785 }' 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.785 [2024-11-26 17:58:47.542325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.785 [2024-11-26 17:58:47.590182] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:05.785 [2024-11-26 17:58:47.590283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.785 [2024-11-26 17:58:47.590305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.785 [2024-11-26 17:58:47.590317] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.785 17:58:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.785 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.786 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.786 17:58:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.786 17:58:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.786 17:58:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.045 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.045 "name": "raid_bdev1", 00:14:06.045 "uuid": "020e93ea-7837-4e46-8b99-76a0ca89910b", 00:14:06.045 "strip_size_kb": 0, 00:14:06.045 "state": "online", 00:14:06.045 "raid_level": "raid1", 00:14:06.045 "superblock": false, 00:14:06.045 "num_base_bdevs": 4, 00:14:06.045 "num_base_bdevs_discovered": 3, 00:14:06.045 "num_base_bdevs_operational": 3, 00:14:06.045 "base_bdevs_list": [ 00:14:06.045 { 00:14:06.045 "name": null, 00:14:06.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.045 "is_configured": false, 00:14:06.045 "data_offset": 0, 00:14:06.045 "data_size": 65536 00:14:06.045 }, 00:14:06.045 { 00:14:06.045 "name": "BaseBdev2", 00:14:06.045 "uuid": "ad925d90-732f-5630-8d9d-a7ac37c60d83", 00:14:06.045 "is_configured": true, 00:14:06.045 "data_offset": 0, 00:14:06.045 "data_size": 65536 00:14:06.045 }, 00:14:06.045 { 00:14:06.045 "name": 
"BaseBdev3", 00:14:06.045 "uuid": "ef478c96-9526-5aec-82dc-0a26287ccb5c", 00:14:06.045 "is_configured": true, 00:14:06.045 "data_offset": 0, 00:14:06.045 "data_size": 65536 00:14:06.045 }, 00:14:06.045 { 00:14:06.045 "name": "BaseBdev4", 00:14:06.045 "uuid": "86d4d514-3933-5cc5-a505-89c3919c4bee", 00:14:06.045 "is_configured": true, 00:14:06.045 "data_offset": 0, 00:14:06.045 "data_size": 65536 00:14:06.045 } 00:14:06.045 ] 00:14:06.045 }' 00:14:06.045 17:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.045 17:58:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.305 17:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.305 17:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.305 17:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.305 17:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.305 17:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.305 17:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.305 17:58:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.305 17:58:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.305 17:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.305 17:58:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.305 17:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.305 "name": "raid_bdev1", 00:14:06.305 "uuid": "020e93ea-7837-4e46-8b99-76a0ca89910b", 00:14:06.305 "strip_size_kb": 0, 00:14:06.305 "state": "online", 00:14:06.305 "raid_level": 
"raid1", 00:14:06.305 "superblock": false, 00:14:06.305 "num_base_bdevs": 4, 00:14:06.305 "num_base_bdevs_discovered": 3, 00:14:06.305 "num_base_bdevs_operational": 3, 00:14:06.305 "base_bdevs_list": [ 00:14:06.305 { 00:14:06.305 "name": null, 00:14:06.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.305 "is_configured": false, 00:14:06.305 "data_offset": 0, 00:14:06.305 "data_size": 65536 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "name": "BaseBdev2", 00:14:06.305 "uuid": "ad925d90-732f-5630-8d9d-a7ac37c60d83", 00:14:06.305 "is_configured": true, 00:14:06.305 "data_offset": 0, 00:14:06.305 "data_size": 65536 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "name": "BaseBdev3", 00:14:06.305 "uuid": "ef478c96-9526-5aec-82dc-0a26287ccb5c", 00:14:06.305 "is_configured": true, 00:14:06.305 "data_offset": 0, 00:14:06.305 "data_size": 65536 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "name": "BaseBdev4", 00:14:06.305 "uuid": "86d4d514-3933-5cc5-a505-89c3919c4bee", 00:14:06.305 "is_configured": true, 00:14:06.305 "data_offset": 0, 00:14:06.305 "data_size": 65536 00:14:06.305 } 00:14:06.305 ] 00:14:06.305 }' 00:14:06.305 17:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.305 17:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.565 17:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.565 17:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.565 17:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:06.565 17:58:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.565 17:58:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.565 [2024-11-26 17:58:48.215702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:14:06.565 [2024-11-26 17:58:48.233008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:06.565 17:58:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.565 17:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:06.565 [2024-11-26 17:58:48.235381] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:07.507 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.507 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.507 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.507 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.507 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.507 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.507 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.507 17:58:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.507 17:58:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.507 17:58:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.507 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.507 "name": "raid_bdev1", 00:14:07.507 "uuid": "020e93ea-7837-4e46-8b99-76a0ca89910b", 00:14:07.507 "strip_size_kb": 0, 00:14:07.507 "state": "online", 00:14:07.507 "raid_level": "raid1", 00:14:07.507 "superblock": false, 00:14:07.507 "num_base_bdevs": 4, 00:14:07.507 "num_base_bdevs_discovered": 4, 00:14:07.507 "num_base_bdevs_operational": 4, 
00:14:07.507 "process": { 00:14:07.507 "type": "rebuild", 00:14:07.507 "target": "spare", 00:14:07.507 "progress": { 00:14:07.507 "blocks": 20480, 00:14:07.507 "percent": 31 00:14:07.507 } 00:14:07.507 }, 00:14:07.507 "base_bdevs_list": [ 00:14:07.507 { 00:14:07.507 "name": "spare", 00:14:07.507 "uuid": "52d0c425-0f11-53d4-8984-e99a42728f14", 00:14:07.507 "is_configured": true, 00:14:07.507 "data_offset": 0, 00:14:07.507 "data_size": 65536 00:14:07.507 }, 00:14:07.507 { 00:14:07.507 "name": "BaseBdev2", 00:14:07.507 "uuid": "ad925d90-732f-5630-8d9d-a7ac37c60d83", 00:14:07.507 "is_configured": true, 00:14:07.507 "data_offset": 0, 00:14:07.507 "data_size": 65536 00:14:07.507 }, 00:14:07.507 { 00:14:07.507 "name": "BaseBdev3", 00:14:07.507 "uuid": "ef478c96-9526-5aec-82dc-0a26287ccb5c", 00:14:07.507 "is_configured": true, 00:14:07.507 "data_offset": 0, 00:14:07.507 "data_size": 65536 00:14:07.507 }, 00:14:07.507 { 00:14:07.507 "name": "BaseBdev4", 00:14:07.507 "uuid": "86d4d514-3933-5cc5-a505-89c3919c4bee", 00:14:07.507 "is_configured": true, 00:14:07.507 "data_offset": 0, 00:14:07.507 "data_size": 65536 00:14:07.507 } 00:14:07.507 ] 00:14:07.507 }' 00:14:07.507 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.507 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.507 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.766 [2024-11-26 17:58:49.378215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:07.766 [2024-11-26 17:58:49.442007] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:07.766 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.766 "name": "raid_bdev1", 00:14:07.766 "uuid": "020e93ea-7837-4e46-8b99-76a0ca89910b", 00:14:07.766 "strip_size_kb": 0, 00:14:07.766 "state": "online", 00:14:07.766 "raid_level": "raid1", 00:14:07.766 "superblock": false, 00:14:07.766 "num_base_bdevs": 4, 00:14:07.766 "num_base_bdevs_discovered": 3, 00:14:07.766 "num_base_bdevs_operational": 3, 00:14:07.766 "process": { 00:14:07.766 "type": "rebuild", 00:14:07.766 "target": "spare", 00:14:07.766 "progress": { 00:14:07.766 "blocks": 24576, 00:14:07.766 "percent": 37 00:14:07.766 } 00:14:07.766 }, 00:14:07.766 "base_bdevs_list": [ 00:14:07.766 { 00:14:07.766 "name": "spare", 00:14:07.766 "uuid": "52d0c425-0f11-53d4-8984-e99a42728f14", 00:14:07.766 "is_configured": true, 00:14:07.766 "data_offset": 0, 00:14:07.766 "data_size": 65536 00:14:07.766 }, 00:14:07.766 { 00:14:07.766 "name": null, 00:14:07.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.766 "is_configured": false, 00:14:07.766 "data_offset": 0, 00:14:07.766 "data_size": 65536 00:14:07.766 }, 00:14:07.766 { 00:14:07.766 "name": "BaseBdev3", 00:14:07.766 "uuid": "ef478c96-9526-5aec-82dc-0a26287ccb5c", 00:14:07.766 "is_configured": true, 00:14:07.766 "data_offset": 0, 00:14:07.766 "data_size": 65536 00:14:07.766 }, 00:14:07.766 { 00:14:07.766 "name": "BaseBdev4", 00:14:07.766 "uuid": "86d4d514-3933-5cc5-a505-89c3919c4bee", 00:14:07.766 "is_configured": true, 00:14:07.766 "data_offset": 0, 00:14:07.766 "data_size": 65536 00:14:07.767 } 00:14:07.767 ] 00:14:07.767 }' 00:14:07.767 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.767 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.767 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.767 17:58:49 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.767 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=470 00:14:07.767 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:07.767 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.767 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.767 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.767 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.767 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.767 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.767 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.767 17:58:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.767 17:58:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.767 17:58:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.026 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.026 "name": "raid_bdev1", 00:14:08.026 "uuid": "020e93ea-7837-4e46-8b99-76a0ca89910b", 00:14:08.026 "strip_size_kb": 0, 00:14:08.026 "state": "online", 00:14:08.026 "raid_level": "raid1", 00:14:08.026 "superblock": false, 00:14:08.026 "num_base_bdevs": 4, 00:14:08.026 "num_base_bdevs_discovered": 3, 00:14:08.026 "num_base_bdevs_operational": 3, 00:14:08.026 "process": { 00:14:08.026 "type": "rebuild", 00:14:08.026 "target": "spare", 00:14:08.026 "progress": { 00:14:08.026 "blocks": 26624, 00:14:08.026 "percent": 40 
00:14:08.026 } 00:14:08.026 }, 00:14:08.026 "base_bdevs_list": [ 00:14:08.026 { 00:14:08.026 "name": "spare", 00:14:08.026 "uuid": "52d0c425-0f11-53d4-8984-e99a42728f14", 00:14:08.026 "is_configured": true, 00:14:08.026 "data_offset": 0, 00:14:08.026 "data_size": 65536 00:14:08.026 }, 00:14:08.026 { 00:14:08.026 "name": null, 00:14:08.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.026 "is_configured": false, 00:14:08.026 "data_offset": 0, 00:14:08.026 "data_size": 65536 00:14:08.026 }, 00:14:08.026 { 00:14:08.026 "name": "BaseBdev3", 00:14:08.026 "uuid": "ef478c96-9526-5aec-82dc-0a26287ccb5c", 00:14:08.026 "is_configured": true, 00:14:08.026 "data_offset": 0, 00:14:08.026 "data_size": 65536 00:14:08.026 }, 00:14:08.026 { 00:14:08.026 "name": "BaseBdev4", 00:14:08.026 "uuid": "86d4d514-3933-5cc5-a505-89c3919c4bee", 00:14:08.026 "is_configured": true, 00:14:08.026 "data_offset": 0, 00:14:08.026 "data_size": 65536 00:14:08.026 } 00:14:08.026 ] 00:14:08.026 }' 00:14:08.026 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.026 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.026 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.026 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.026 17:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:08.967 17:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:08.967 17:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.967 17:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.967 17:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.967 17:58:50 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.967 17:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.967 17:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.967 17:58:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.967 17:58:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.967 17:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.967 17:58:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.967 17:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.967 "name": "raid_bdev1", 00:14:08.967 "uuid": "020e93ea-7837-4e46-8b99-76a0ca89910b", 00:14:08.967 "strip_size_kb": 0, 00:14:08.967 "state": "online", 00:14:08.967 "raid_level": "raid1", 00:14:08.967 "superblock": false, 00:14:08.967 "num_base_bdevs": 4, 00:14:08.967 "num_base_bdevs_discovered": 3, 00:14:08.967 "num_base_bdevs_operational": 3, 00:14:08.967 "process": { 00:14:08.967 "type": "rebuild", 00:14:08.967 "target": "spare", 00:14:08.967 "progress": { 00:14:08.967 "blocks": 49152, 00:14:08.967 "percent": 75 00:14:08.967 } 00:14:08.967 }, 00:14:08.967 "base_bdevs_list": [ 00:14:08.967 { 00:14:08.967 "name": "spare", 00:14:08.967 "uuid": "52d0c425-0f11-53d4-8984-e99a42728f14", 00:14:08.967 "is_configured": true, 00:14:08.967 "data_offset": 0, 00:14:08.967 "data_size": 65536 00:14:08.967 }, 00:14:08.967 { 00:14:08.967 "name": null, 00:14:08.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.967 "is_configured": false, 00:14:08.967 "data_offset": 0, 00:14:08.967 "data_size": 65536 00:14:08.967 }, 00:14:08.967 { 00:14:08.967 "name": "BaseBdev3", 00:14:08.967 "uuid": "ef478c96-9526-5aec-82dc-0a26287ccb5c", 00:14:08.967 "is_configured": true, 
00:14:08.967 "data_offset": 0, 00:14:08.967 "data_size": 65536 00:14:08.967 }, 00:14:08.967 { 00:14:08.967 "name": "BaseBdev4", 00:14:08.967 "uuid": "86d4d514-3933-5cc5-a505-89c3919c4bee", 00:14:08.967 "is_configured": true, 00:14:08.967 "data_offset": 0, 00:14:08.967 "data_size": 65536 00:14:08.967 } 00:14:08.967 ] 00:14:08.967 }' 00:14:08.967 17:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.227 17:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.227 17:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.227 17:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.227 17:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:09.796 [2024-11-26 17:58:51.452918] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:09.796 [2024-11-26 17:58:51.453029] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:09.796 [2024-11-26 17:58:51.453098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.056 17:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:10.056 17:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.056 17:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.056 17:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.056 17:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.056 17:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.056 17:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:10.056 17:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.056 17:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.056 17:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.056 17:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.316 17:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.316 "name": "raid_bdev1", 00:14:10.316 "uuid": "020e93ea-7837-4e46-8b99-76a0ca89910b", 00:14:10.316 "strip_size_kb": 0, 00:14:10.316 "state": "online", 00:14:10.316 "raid_level": "raid1", 00:14:10.316 "superblock": false, 00:14:10.316 "num_base_bdevs": 4, 00:14:10.316 "num_base_bdevs_discovered": 3, 00:14:10.316 "num_base_bdevs_operational": 3, 00:14:10.316 "base_bdevs_list": [ 00:14:10.316 { 00:14:10.316 "name": "spare", 00:14:10.316 "uuid": "52d0c425-0f11-53d4-8984-e99a42728f14", 00:14:10.316 "is_configured": true, 00:14:10.316 "data_offset": 0, 00:14:10.316 "data_size": 65536 00:14:10.316 }, 00:14:10.316 { 00:14:10.316 "name": null, 00:14:10.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.316 "is_configured": false, 00:14:10.316 "data_offset": 0, 00:14:10.316 "data_size": 65536 00:14:10.316 }, 00:14:10.316 { 00:14:10.316 "name": "BaseBdev3", 00:14:10.316 "uuid": "ef478c96-9526-5aec-82dc-0a26287ccb5c", 00:14:10.316 "is_configured": true, 00:14:10.316 "data_offset": 0, 00:14:10.316 "data_size": 65536 00:14:10.316 }, 00:14:10.316 { 00:14:10.316 "name": "BaseBdev4", 00:14:10.316 "uuid": "86d4d514-3933-5cc5-a505-89c3919c4bee", 00:14:10.316 "is_configured": true, 00:14:10.316 "data_offset": 0, 00:14:10.316 "data_size": 65536 00:14:10.316 } 00:14:10.316 ] 00:14:10.316 }' 00:14:10.316 17:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.316 17:58:51 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:10.316 17:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.316 "name": "raid_bdev1", 00:14:10.316 "uuid": "020e93ea-7837-4e46-8b99-76a0ca89910b", 00:14:10.316 "strip_size_kb": 0, 00:14:10.316 "state": "online", 00:14:10.316 "raid_level": "raid1", 00:14:10.316 "superblock": false, 00:14:10.316 "num_base_bdevs": 4, 00:14:10.316 "num_base_bdevs_discovered": 3, 00:14:10.316 "num_base_bdevs_operational": 3, 00:14:10.316 "base_bdevs_list": [ 00:14:10.316 { 00:14:10.316 "name": "spare", 
00:14:10.316 "uuid": "52d0c425-0f11-53d4-8984-e99a42728f14", 00:14:10.316 "is_configured": true, 00:14:10.316 "data_offset": 0, 00:14:10.316 "data_size": 65536 00:14:10.316 }, 00:14:10.316 { 00:14:10.316 "name": null, 00:14:10.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.316 "is_configured": false, 00:14:10.316 "data_offset": 0, 00:14:10.316 "data_size": 65536 00:14:10.316 }, 00:14:10.316 { 00:14:10.316 "name": "BaseBdev3", 00:14:10.316 "uuid": "ef478c96-9526-5aec-82dc-0a26287ccb5c", 00:14:10.316 "is_configured": true, 00:14:10.316 "data_offset": 0, 00:14:10.316 "data_size": 65536 00:14:10.316 }, 00:14:10.316 { 00:14:10.316 "name": "BaseBdev4", 00:14:10.316 "uuid": "86d4d514-3933-5cc5-a505-89c3919c4bee", 00:14:10.316 "is_configured": true, 00:14:10.316 "data_offset": 0, 00:14:10.316 "data_size": 65536 00:14:10.316 } 00:14:10.316 ] 00:14:10.316 }' 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.316 17:58:52 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.316 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.575 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.575 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.575 "name": "raid_bdev1", 00:14:10.575 "uuid": "020e93ea-7837-4e46-8b99-76a0ca89910b", 00:14:10.575 "strip_size_kb": 0, 00:14:10.575 "state": "online", 00:14:10.575 "raid_level": "raid1", 00:14:10.575 "superblock": false, 00:14:10.575 "num_base_bdevs": 4, 00:14:10.575 "num_base_bdevs_discovered": 3, 00:14:10.575 "num_base_bdevs_operational": 3, 00:14:10.575 "base_bdevs_list": [ 00:14:10.575 { 00:14:10.575 "name": "spare", 00:14:10.575 "uuid": "52d0c425-0f11-53d4-8984-e99a42728f14", 00:14:10.575 "is_configured": true, 00:14:10.575 "data_offset": 0, 00:14:10.575 "data_size": 65536 00:14:10.575 }, 00:14:10.575 { 00:14:10.575 "name": null, 00:14:10.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.575 "is_configured": false, 00:14:10.575 "data_offset": 0, 00:14:10.575 "data_size": 65536 00:14:10.575 }, 00:14:10.575 { 00:14:10.575 "name": "BaseBdev3", 00:14:10.575 "uuid": "ef478c96-9526-5aec-82dc-0a26287ccb5c", 00:14:10.575 "is_configured": true, 
00:14:10.575 "data_offset": 0, 00:14:10.575 "data_size": 65536 00:14:10.575 }, 00:14:10.575 { 00:14:10.575 "name": "BaseBdev4", 00:14:10.576 "uuid": "86d4d514-3933-5cc5-a505-89c3919c4bee", 00:14:10.576 "is_configured": true, 00:14:10.576 "data_offset": 0, 00:14:10.576 "data_size": 65536 00:14:10.576 } 00:14:10.576 ] 00:14:10.576 }' 00:14:10.576 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.576 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.836 [2024-11-26 17:58:52.634661] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:10.836 [2024-11-26 17:58:52.634705] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:10.836 [2024-11-26 17:58:52.634813] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.836 [2024-11-26 17:58:52.634914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.836 [2024-11-26 17:58:52.634930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:10.836 17:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:11.097 /dev/nbd0 00:14:11.358 17:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:11.358 17:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:11.358 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:11.358 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:11.358 17:58:52 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:11.358 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:11.358 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:11.358 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:11.358 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:11.358 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:11.358 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.358 1+0 records in 00:14:11.358 1+0 records out 00:14:11.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475013 s, 8.6 MB/s 00:14:11.358 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.358 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:11.358 17:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.358 17:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:11.358 17:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:11.358 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:11.358 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:11.358 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:11.618 /dev/nbd1 00:14:11.618 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:11.618 
17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:11.618 17:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:11.618 17:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:11.618 17:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:11.618 17:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:11.618 17:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:11.618 17:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:11.618 17:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:11.618 17:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:11.618 17:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.618 1+0 records in 00:14:11.618 1+0 records out 00:14:11.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463712 s, 8.8 MB/s 00:14:11.619 17:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.619 17:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:11.619 17:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.619 17:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:11.619 17:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:11.619 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:11.619 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:14:11.619 17:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:11.878 17:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:11.878 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.878 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:11.878 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:11.878 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:11.878 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.878 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:12.137 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:12.137 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:12.137 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:12.137 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.137 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.137 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:12.137 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:12.137 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.137 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.137 17:58:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:12.397 
17:58:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77895 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77895 ']' 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77895 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77895 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.397 killing process with pid 77895 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77895' 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77895 00:14:12.397 
Received shutdown signal, test time was about 60.000000 seconds 00:14:12.397 00:14:12.397 Latency(us) 00:14:12.397 [2024-11-26T17:58:54.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.397 [2024-11-26T17:58:54.260Z] =================================================================================================================== 00:14:12.397 [2024-11-26T17:58:54.260Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:12.397 [2024-11-26 17:58:54.091282] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.397 17:58:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77895 00:14:12.966 [2024-11-26 17:58:54.671962] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:14.347 00:14:14.347 real 0m18.798s 00:14:14.347 user 0m21.144s 00:14:14.347 sys 0m3.282s 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.347 ************************************ 00:14:14.347 END TEST raid_rebuild_test 00:14:14.347 ************************************ 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.347 17:58:56 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:14.347 17:58:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:14.347 17:58:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.347 17:58:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:14.347 ************************************ 00:14:14.347 START TEST raid_rebuild_test_sb 00:14:14.347 ************************************ 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:14:14.347 17:58:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78351 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78351 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78351 ']' 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.347 17:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.347 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:14.347 Zero copy mechanism will not be used. 00:14:14.347 [2024-11-26 17:58:56.176171] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:14:14.347 [2024-11-26 17:58:56.176319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78351 ] 00:14:14.606 [2024-11-26 17:58:56.355853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.869 [2024-11-26 17:58:56.485032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.869 [2024-11-26 17:58:56.708628] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.869 [2024-11-26 17:58:56.708707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.440 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.440 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:15.440 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:15.440 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:15.440 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.440 17:58:57 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.440 BaseBdev1_malloc 00:14:15.440 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.440 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:15.440 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.440 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.440 [2024-11-26 17:58:57.109925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:15.440 [2024-11-26 17:58:57.110000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.440 [2024-11-26 17:58:57.110037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:15.440 [2024-11-26 17:58:57.110052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.440 [2024-11-26 17:58:57.112274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.440 [2024-11-26 17:58:57.112317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:15.440 BaseBdev1 00:14:15.440 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.440 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:15.440 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:15.440 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.440 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.440 BaseBdev2_malloc 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.441 [2024-11-26 17:58:57.170741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:15.441 [2024-11-26 17:58:57.170833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.441 [2024-11-26 17:58:57.170860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:15.441 [2024-11-26 17:58:57.170873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.441 [2024-11-26 17:58:57.173425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.441 [2024-11-26 17:58:57.173483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:15.441 BaseBdev2 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.441 BaseBdev3_malloc 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:15.441 17:58:57 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.441 [2024-11-26 17:58:57.242047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:15.441 [2024-11-26 17:58:57.242126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.441 [2024-11-26 17:58:57.242155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:15.441 [2024-11-26 17:58:57.242167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.441 [2024-11-26 17:58:57.244627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.441 [2024-11-26 17:58:57.244678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:15.441 BaseBdev3 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.441 BaseBdev4_malloc 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.441 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.441 
[2024-11-26 17:58:57.299431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:15.441 [2024-11-26 17:58:57.299505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.441 [2024-11-26 17:58:57.299532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:15.441 [2024-11-26 17:58:57.299545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.441 [2024-11-26 17:58:57.301971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.441 [2024-11-26 17:58:57.302035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:15.699 BaseBdev4 00:14:15.699 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.699 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:15.699 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.699 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.699 spare_malloc 00:14:15.699 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.699 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:15.699 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.699 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.699 spare_delay 00:14:15.699 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.699 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:15.699 17:58:57 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.699 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.699 [2024-11-26 17:58:57.371906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:15.699 [2024-11-26 17:58:57.371970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.699 [2024-11-26 17:58:57.371991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:15.699 [2024-11-26 17:58:57.372002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.699 [2024-11-26 17:58:57.374211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.699 [2024-11-26 17:58:57.374252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:15.699 spare 00:14:15.699 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.699 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:15.699 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.699 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.699 [2024-11-26 17:58:57.383941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.699 [2024-11-26 17:58:57.385902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.699 [2024-11-26 17:58:57.385981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:15.699 [2024-11-26 17:58:57.386055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:15.699 [2024-11-26 17:58:57.386264] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:15.699 [2024-11-26 17:58:57.386289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:15.699 [2024-11-26 17:58:57.386630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:15.700 [2024-11-26 17:58:57.386848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:15.700 [2024-11-26 17:58:57.386868] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:15.700 [2024-11-26 17:58:57.387085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.700 "name": "raid_bdev1", 00:14:15.700 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:15.700 "strip_size_kb": 0, 00:14:15.700 "state": "online", 00:14:15.700 "raid_level": "raid1", 00:14:15.700 "superblock": true, 00:14:15.700 "num_base_bdevs": 4, 00:14:15.700 "num_base_bdevs_discovered": 4, 00:14:15.700 "num_base_bdevs_operational": 4, 00:14:15.700 "base_bdevs_list": [ 00:14:15.700 { 00:14:15.700 "name": "BaseBdev1", 00:14:15.700 "uuid": "4ea57c80-a05c-5dea-874a-2d06acb2fa87", 00:14:15.700 "is_configured": true, 00:14:15.700 "data_offset": 2048, 00:14:15.700 "data_size": 63488 00:14:15.700 }, 00:14:15.700 { 00:14:15.700 "name": "BaseBdev2", 00:14:15.700 "uuid": "30fe5db5-5c0a-5f3d-b68a-c26d002f3a61", 00:14:15.700 "is_configured": true, 00:14:15.700 "data_offset": 2048, 00:14:15.700 "data_size": 63488 00:14:15.700 }, 00:14:15.700 { 00:14:15.700 "name": "BaseBdev3", 00:14:15.700 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:15.700 "is_configured": true, 00:14:15.700 "data_offset": 2048, 00:14:15.700 "data_size": 63488 00:14:15.700 }, 00:14:15.700 { 00:14:15.700 "name": "BaseBdev4", 00:14:15.700 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:15.700 "is_configured": true, 00:14:15.700 "data_offset": 2048, 00:14:15.700 "data_size": 63488 00:14:15.700 } 00:14:15.700 ] 00:14:15.700 }' 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.700 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.268 [2024-11-26 17:58:57.839544] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:16.268 17:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:16.526 [2024-11-26 17:58:58.182662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:16.526 /dev/nbd0 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:16.526 
17:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:16.526 1+0 records in 00:14:16.526 1+0 records out 00:14:16.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439518 s, 9.3 MB/s 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:16.526 17:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:23.108 63488+0 records in 00:14:23.108 63488+0 records out 00:14:23.108 32505856 bytes (33 MB, 31 MiB) copied, 6.10757 s, 5.3 MB/s 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:23.108 [2024-11-26 17:59:04.583695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.108 [2024-11-26 17:59:04.620196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:23.108 
17:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.108 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.109 17:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.109 17:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.109 17:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.109 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.109 "name": "raid_bdev1", 00:14:23.109 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:23.109 "strip_size_kb": 0, 00:14:23.109 "state": 
"online", 00:14:23.109 "raid_level": "raid1", 00:14:23.109 "superblock": true, 00:14:23.109 "num_base_bdevs": 4, 00:14:23.109 "num_base_bdevs_discovered": 3, 00:14:23.109 "num_base_bdevs_operational": 3, 00:14:23.109 "base_bdevs_list": [ 00:14:23.109 { 00:14:23.109 "name": null, 00:14:23.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.109 "is_configured": false, 00:14:23.109 "data_offset": 0, 00:14:23.109 "data_size": 63488 00:14:23.109 }, 00:14:23.109 { 00:14:23.109 "name": "BaseBdev2", 00:14:23.109 "uuid": "30fe5db5-5c0a-5f3d-b68a-c26d002f3a61", 00:14:23.109 "is_configured": true, 00:14:23.109 "data_offset": 2048, 00:14:23.109 "data_size": 63488 00:14:23.109 }, 00:14:23.109 { 00:14:23.109 "name": "BaseBdev3", 00:14:23.109 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:23.109 "is_configured": true, 00:14:23.109 "data_offset": 2048, 00:14:23.109 "data_size": 63488 00:14:23.109 }, 00:14:23.109 { 00:14:23.109 "name": "BaseBdev4", 00:14:23.109 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:23.109 "is_configured": true, 00:14:23.109 "data_offset": 2048, 00:14:23.109 "data_size": 63488 00:14:23.109 } 00:14:23.109 ] 00:14:23.109 }' 00:14:23.109 17:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.109 17:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.368 17:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:23.368 17:59:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.368 17:59:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.368 [2024-11-26 17:59:05.099349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:23.368 [2024-11-26 17:59:05.116354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:23.368 17:59:05 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.368 17:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:23.368 [2024-11-26 17:59:05.118552] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:24.307 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.307 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.307 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.307 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.307 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.307 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.307 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.307 17:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.307 17:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.307 17:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.567 "name": "raid_bdev1", 00:14:24.567 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:24.567 "strip_size_kb": 0, 00:14:24.567 "state": "online", 00:14:24.567 "raid_level": "raid1", 00:14:24.567 "superblock": true, 00:14:24.567 "num_base_bdevs": 4, 00:14:24.567 "num_base_bdevs_discovered": 4, 00:14:24.567 "num_base_bdevs_operational": 4, 00:14:24.567 "process": { 00:14:24.567 "type": "rebuild", 00:14:24.567 "target": "spare", 00:14:24.567 "progress": { 00:14:24.567 "blocks": 20480, 
00:14:24.567 "percent": 32 00:14:24.567 } 00:14:24.567 }, 00:14:24.567 "base_bdevs_list": [ 00:14:24.567 { 00:14:24.567 "name": "spare", 00:14:24.567 "uuid": "e7420ced-446b-5380-83aa-d513a4157458", 00:14:24.567 "is_configured": true, 00:14:24.567 "data_offset": 2048, 00:14:24.567 "data_size": 63488 00:14:24.567 }, 00:14:24.567 { 00:14:24.567 "name": "BaseBdev2", 00:14:24.567 "uuid": "30fe5db5-5c0a-5f3d-b68a-c26d002f3a61", 00:14:24.567 "is_configured": true, 00:14:24.567 "data_offset": 2048, 00:14:24.567 "data_size": 63488 00:14:24.567 }, 00:14:24.567 { 00:14:24.567 "name": "BaseBdev3", 00:14:24.567 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:24.567 "is_configured": true, 00:14:24.567 "data_offset": 2048, 00:14:24.567 "data_size": 63488 00:14:24.567 }, 00:14:24.567 { 00:14:24.567 "name": "BaseBdev4", 00:14:24.567 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:24.567 "is_configured": true, 00:14:24.567 "data_offset": 2048, 00:14:24.567 "data_size": 63488 00:14:24.567 } 00:14:24.567 ] 00:14:24.567 }' 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.567 [2024-11-26 17:59:06.265195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.567 [2024-11-26 17:59:06.324790] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:24.567 [2024-11-26 17:59:06.324880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.567 [2024-11-26 17:59:06.324898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.567 [2024-11-26 17:59:06.324908] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.567 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.567 "name": "raid_bdev1", 00:14:24.567 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:24.567 "strip_size_kb": 0, 00:14:24.567 "state": "online", 00:14:24.567 "raid_level": "raid1", 00:14:24.567 "superblock": true, 00:14:24.567 "num_base_bdevs": 4, 00:14:24.567 "num_base_bdevs_discovered": 3, 00:14:24.567 "num_base_bdevs_operational": 3, 00:14:24.567 "base_bdevs_list": [ 00:14:24.567 { 00:14:24.567 "name": null, 00:14:24.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.567 "is_configured": false, 00:14:24.567 "data_offset": 0, 00:14:24.567 "data_size": 63488 00:14:24.567 }, 00:14:24.567 { 00:14:24.567 "name": "BaseBdev2", 00:14:24.567 "uuid": "30fe5db5-5c0a-5f3d-b68a-c26d002f3a61", 00:14:24.567 "is_configured": true, 00:14:24.567 "data_offset": 2048, 00:14:24.567 "data_size": 63488 00:14:24.567 }, 00:14:24.567 { 00:14:24.567 "name": "BaseBdev3", 00:14:24.567 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:24.567 "is_configured": true, 00:14:24.567 "data_offset": 2048, 00:14:24.567 "data_size": 63488 00:14:24.567 }, 00:14:24.567 { 00:14:24.568 "name": "BaseBdev4", 00:14:24.568 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:24.568 "is_configured": true, 00:14:24.568 "data_offset": 2048, 00:14:24.568 "data_size": 63488 00:14:24.568 } 00:14:24.568 ] 00:14:24.568 }' 00:14:24.568 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.568 17:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.135 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:25.135 
17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.135 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:25.135 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:25.135 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.135 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.135 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.135 17:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.135 17:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.135 17:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.135 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.135 "name": "raid_bdev1", 00:14:25.135 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:25.135 "strip_size_kb": 0, 00:14:25.135 "state": "online", 00:14:25.135 "raid_level": "raid1", 00:14:25.135 "superblock": true, 00:14:25.135 "num_base_bdevs": 4, 00:14:25.135 "num_base_bdevs_discovered": 3, 00:14:25.135 "num_base_bdevs_operational": 3, 00:14:25.135 "base_bdevs_list": [ 00:14:25.135 { 00:14:25.135 "name": null, 00:14:25.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.135 "is_configured": false, 00:14:25.135 "data_offset": 0, 00:14:25.135 "data_size": 63488 00:14:25.135 }, 00:14:25.135 { 00:14:25.135 "name": "BaseBdev2", 00:14:25.135 "uuid": "30fe5db5-5c0a-5f3d-b68a-c26d002f3a61", 00:14:25.135 "is_configured": true, 00:14:25.135 "data_offset": 2048, 00:14:25.135 "data_size": 63488 00:14:25.135 }, 00:14:25.135 { 00:14:25.135 "name": "BaseBdev3", 00:14:25.135 "uuid": 
"12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:25.135 "is_configured": true, 00:14:25.135 "data_offset": 2048, 00:14:25.135 "data_size": 63488 00:14:25.135 }, 00:14:25.135 { 00:14:25.135 "name": "BaseBdev4", 00:14:25.135 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:25.135 "is_configured": true, 00:14:25.135 "data_offset": 2048, 00:14:25.135 "data_size": 63488 00:14:25.135 } 00:14:25.135 ] 00:14:25.135 }' 00:14:25.135 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.135 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:25.135 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.135 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:25.135 17:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:25.135 17:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.135 17:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.135 [2024-11-26 17:59:06.988615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.397 [2024-11-26 17:59:07.006476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:25.397 17:59:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.397 17:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:25.397 [2024-11-26 17:59:07.008706] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.333 "name": "raid_bdev1", 00:14:26.333 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:26.333 "strip_size_kb": 0, 00:14:26.333 "state": "online", 00:14:26.333 "raid_level": "raid1", 00:14:26.333 "superblock": true, 00:14:26.333 "num_base_bdevs": 4, 00:14:26.333 "num_base_bdevs_discovered": 4, 00:14:26.333 "num_base_bdevs_operational": 4, 00:14:26.333 "process": { 00:14:26.333 "type": "rebuild", 00:14:26.333 "target": "spare", 00:14:26.333 "progress": { 00:14:26.333 "blocks": 20480, 00:14:26.333 "percent": 32 00:14:26.333 } 00:14:26.333 }, 00:14:26.333 "base_bdevs_list": [ 00:14:26.333 { 00:14:26.333 "name": "spare", 00:14:26.333 "uuid": "e7420ced-446b-5380-83aa-d513a4157458", 00:14:26.333 "is_configured": true, 00:14:26.333 "data_offset": 2048, 00:14:26.333 "data_size": 63488 00:14:26.333 }, 00:14:26.333 { 00:14:26.333 "name": "BaseBdev2", 00:14:26.333 "uuid": "30fe5db5-5c0a-5f3d-b68a-c26d002f3a61", 00:14:26.333 "is_configured": true, 00:14:26.333 "data_offset": 2048, 
00:14:26.333 "data_size": 63488 00:14:26.333 }, 00:14:26.333 { 00:14:26.333 "name": "BaseBdev3", 00:14:26.333 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:26.333 "is_configured": true, 00:14:26.333 "data_offset": 2048, 00:14:26.333 "data_size": 63488 00:14:26.333 }, 00:14:26.333 { 00:14:26.333 "name": "BaseBdev4", 00:14:26.333 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:26.333 "is_configured": true, 00:14:26.333 "data_offset": 2048, 00:14:26.333 "data_size": 63488 00:14:26.333 } 00:14:26.333 ] 00:14:26.333 }' 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:26.333 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.333 17:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.333 [2024-11-26 17:59:08.163920] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:26.593 [2024-11-26 17:59:08.315024] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:26.593 17:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.593 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:26.593 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:26.593 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.593 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.593 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.593 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.593 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.593 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.593 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.593 17:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.593 17:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.593 17:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.593 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.593 "name": "raid_bdev1", 00:14:26.593 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:26.593 "strip_size_kb": 0, 00:14:26.593 "state": "online", 00:14:26.593 "raid_level": "raid1", 00:14:26.593 "superblock": true, 00:14:26.593 "num_base_bdevs": 4, 
00:14:26.593 "num_base_bdevs_discovered": 3, 00:14:26.593 "num_base_bdevs_operational": 3, 00:14:26.593 "process": { 00:14:26.593 "type": "rebuild", 00:14:26.593 "target": "spare", 00:14:26.593 "progress": { 00:14:26.593 "blocks": 24576, 00:14:26.593 "percent": 38 00:14:26.593 } 00:14:26.593 }, 00:14:26.593 "base_bdevs_list": [ 00:14:26.593 { 00:14:26.593 "name": "spare", 00:14:26.593 "uuid": "e7420ced-446b-5380-83aa-d513a4157458", 00:14:26.593 "is_configured": true, 00:14:26.593 "data_offset": 2048, 00:14:26.593 "data_size": 63488 00:14:26.593 }, 00:14:26.593 { 00:14:26.593 "name": null, 00:14:26.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.593 "is_configured": false, 00:14:26.593 "data_offset": 0, 00:14:26.593 "data_size": 63488 00:14:26.593 }, 00:14:26.593 { 00:14:26.593 "name": "BaseBdev3", 00:14:26.593 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:26.593 "is_configured": true, 00:14:26.593 "data_offset": 2048, 00:14:26.593 "data_size": 63488 00:14:26.593 }, 00:14:26.593 { 00:14:26.593 "name": "BaseBdev4", 00:14:26.593 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:26.593 "is_configured": true, 00:14:26.593 "data_offset": 2048, 00:14:26.593 "data_size": 63488 00:14:26.593 } 00:14:26.593 ] 00:14:26.593 }' 00:14:26.593 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.593 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.593 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=489 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.852 "name": "raid_bdev1", 00:14:26.852 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:26.852 "strip_size_kb": 0, 00:14:26.852 "state": "online", 00:14:26.852 "raid_level": "raid1", 00:14:26.852 "superblock": true, 00:14:26.852 "num_base_bdevs": 4, 00:14:26.852 "num_base_bdevs_discovered": 3, 00:14:26.852 "num_base_bdevs_operational": 3, 00:14:26.852 "process": { 00:14:26.852 "type": "rebuild", 00:14:26.852 "target": "spare", 00:14:26.852 "progress": { 00:14:26.852 "blocks": 26624, 00:14:26.852 "percent": 41 00:14:26.852 } 00:14:26.852 }, 00:14:26.852 "base_bdevs_list": [ 00:14:26.852 { 00:14:26.852 "name": "spare", 00:14:26.852 "uuid": "e7420ced-446b-5380-83aa-d513a4157458", 00:14:26.852 "is_configured": true, 00:14:26.852 "data_offset": 2048, 00:14:26.852 "data_size": 63488 00:14:26.852 }, 00:14:26.852 { 
00:14:26.852 "name": null, 00:14:26.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.852 "is_configured": false, 00:14:26.852 "data_offset": 0, 00:14:26.852 "data_size": 63488 00:14:26.852 }, 00:14:26.852 { 00:14:26.852 "name": "BaseBdev3", 00:14:26.852 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:26.852 "is_configured": true, 00:14:26.852 "data_offset": 2048, 00:14:26.852 "data_size": 63488 00:14:26.852 }, 00:14:26.852 { 00:14:26.852 "name": "BaseBdev4", 00:14:26.852 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:26.852 "is_configured": true, 00:14:26.852 "data_offset": 2048, 00:14:26.852 "data_size": 63488 00:14:26.852 } 00:14:26.852 ] 00:14:26.852 }' 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.852 17:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:27.788 17:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:27.788 17:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.788 17:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.788 17:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.788 17:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.788 17:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.788 17:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:14:27.788 17:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.788 17:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.788 17:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.788 17:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.046 17:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.046 "name": "raid_bdev1", 00:14:28.046 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:28.046 "strip_size_kb": 0, 00:14:28.046 "state": "online", 00:14:28.046 "raid_level": "raid1", 00:14:28.046 "superblock": true, 00:14:28.046 "num_base_bdevs": 4, 00:14:28.046 "num_base_bdevs_discovered": 3, 00:14:28.046 "num_base_bdevs_operational": 3, 00:14:28.046 "process": { 00:14:28.046 "type": "rebuild", 00:14:28.046 "target": "spare", 00:14:28.046 "progress": { 00:14:28.046 "blocks": 51200, 00:14:28.046 "percent": 80 00:14:28.046 } 00:14:28.046 }, 00:14:28.046 "base_bdevs_list": [ 00:14:28.046 { 00:14:28.046 "name": "spare", 00:14:28.046 "uuid": "e7420ced-446b-5380-83aa-d513a4157458", 00:14:28.046 "is_configured": true, 00:14:28.046 "data_offset": 2048, 00:14:28.046 "data_size": 63488 00:14:28.046 }, 00:14:28.046 { 00:14:28.046 "name": null, 00:14:28.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.046 "is_configured": false, 00:14:28.046 "data_offset": 0, 00:14:28.046 "data_size": 63488 00:14:28.046 }, 00:14:28.046 { 00:14:28.046 "name": "BaseBdev3", 00:14:28.046 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:28.046 "is_configured": true, 00:14:28.046 "data_offset": 2048, 00:14:28.046 "data_size": 63488 00:14:28.046 }, 00:14:28.046 { 00:14:28.046 "name": "BaseBdev4", 00:14:28.046 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:28.046 "is_configured": true, 00:14:28.046 "data_offset": 
2048, 00:14:28.046 "data_size": 63488 00:14:28.046 } 00:14:28.046 ] 00:14:28.046 }' 00:14:28.046 17:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.046 17:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.046 17:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.046 17:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.046 17:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:28.613 [2024-11-26 17:59:10.225438] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:28.613 [2024-11-26 17:59:10.225545] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:28.613 [2024-11-26 17:59:10.225707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.180 "name": "raid_bdev1", 00:14:29.180 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:29.180 "strip_size_kb": 0, 00:14:29.180 "state": "online", 00:14:29.180 "raid_level": "raid1", 00:14:29.180 "superblock": true, 00:14:29.180 "num_base_bdevs": 4, 00:14:29.180 "num_base_bdevs_discovered": 3, 00:14:29.180 "num_base_bdevs_operational": 3, 00:14:29.180 "base_bdevs_list": [ 00:14:29.180 { 00:14:29.180 "name": "spare", 00:14:29.180 "uuid": "e7420ced-446b-5380-83aa-d513a4157458", 00:14:29.180 "is_configured": true, 00:14:29.180 "data_offset": 2048, 00:14:29.180 "data_size": 63488 00:14:29.180 }, 00:14:29.180 { 00:14:29.180 "name": null, 00:14:29.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.180 "is_configured": false, 00:14:29.180 "data_offset": 0, 00:14:29.180 "data_size": 63488 00:14:29.180 }, 00:14:29.180 { 00:14:29.180 "name": "BaseBdev3", 00:14:29.180 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:29.180 "is_configured": true, 00:14:29.180 "data_offset": 2048, 00:14:29.180 "data_size": 63488 00:14:29.180 }, 00:14:29.180 { 00:14:29.180 "name": "BaseBdev4", 00:14:29.180 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:29.180 "is_configured": true, 00:14:29.180 "data_offset": 2048, 00:14:29.180 "data_size": 63488 00:14:29.180 } 00:14:29.180 ] 00:14:29.180 }' 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.180 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.180 "name": "raid_bdev1", 00:14:29.180 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:29.180 "strip_size_kb": 0, 00:14:29.180 "state": "online", 00:14:29.180 "raid_level": "raid1", 00:14:29.180 "superblock": true, 00:14:29.180 "num_base_bdevs": 4, 00:14:29.180 "num_base_bdevs_discovered": 3, 00:14:29.180 "num_base_bdevs_operational": 3, 00:14:29.180 "base_bdevs_list": [ 00:14:29.180 { 00:14:29.180 "name": "spare", 00:14:29.180 "uuid": "e7420ced-446b-5380-83aa-d513a4157458", 00:14:29.180 "is_configured": true, 00:14:29.180 "data_offset": 2048, 00:14:29.180 "data_size": 63488 
00:14:29.180 }, 00:14:29.180 { 00:14:29.180 "name": null, 00:14:29.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.180 "is_configured": false, 00:14:29.180 "data_offset": 0, 00:14:29.180 "data_size": 63488 00:14:29.180 }, 00:14:29.180 { 00:14:29.180 "name": "BaseBdev3", 00:14:29.181 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:29.181 "is_configured": true, 00:14:29.181 "data_offset": 2048, 00:14:29.181 "data_size": 63488 00:14:29.181 }, 00:14:29.181 { 00:14:29.181 "name": "BaseBdev4", 00:14:29.181 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:29.181 "is_configured": true, 00:14:29.181 "data_offset": 2048, 00:14:29.181 "data_size": 63488 00:14:29.181 } 00:14:29.181 ] 00:14:29.181 }' 00:14:29.181 17:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.181 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:29.181 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.445 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:29.445 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:29.445 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.445 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.445 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.445 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.445 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.445 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.445 17:59:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.445 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.445 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.445 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.445 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.445 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.445 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.445 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.445 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.445 "name": "raid_bdev1", 00:14:29.445 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:29.445 "strip_size_kb": 0, 00:14:29.445 "state": "online", 00:14:29.445 "raid_level": "raid1", 00:14:29.446 "superblock": true, 00:14:29.446 "num_base_bdevs": 4, 00:14:29.446 "num_base_bdevs_discovered": 3, 00:14:29.446 "num_base_bdevs_operational": 3, 00:14:29.446 "base_bdevs_list": [ 00:14:29.446 { 00:14:29.446 "name": "spare", 00:14:29.446 "uuid": "e7420ced-446b-5380-83aa-d513a4157458", 00:14:29.446 "is_configured": true, 00:14:29.446 "data_offset": 2048, 00:14:29.446 "data_size": 63488 00:14:29.446 }, 00:14:29.446 { 00:14:29.446 "name": null, 00:14:29.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.446 "is_configured": false, 00:14:29.446 "data_offset": 0, 00:14:29.446 "data_size": 63488 00:14:29.446 }, 00:14:29.446 { 00:14:29.446 "name": "BaseBdev3", 00:14:29.446 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:29.446 "is_configured": true, 00:14:29.446 "data_offset": 2048, 00:14:29.446 "data_size": 63488 00:14:29.446 }, 
00:14:29.446 { 00:14:29.446 "name": "BaseBdev4", 00:14:29.446 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:29.446 "is_configured": true, 00:14:29.446 "data_offset": 2048, 00:14:29.446 "data_size": 63488 00:14:29.446 } 00:14:29.446 ] 00:14:29.446 }' 00:14:29.446 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.446 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.715 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:29.715 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.715 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.715 [2024-11-26 17:59:11.452161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:29.715 [2024-11-26 17:59:11.452207] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.715 [2024-11-26 17:59:11.452319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.715 [2024-11-26 17:59:11.452416] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.715 [2024-11-26 17:59:11.452436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:29.715 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.715 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.715 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.715 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:29.715 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.715 17:59:11 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.715 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:29.715 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:29.715 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:29.715 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:29.715 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.715 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:29.715 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:29.715 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:29.715 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:29.715 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:29.716 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:29.716 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:29.716 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:29.975 /dev/nbd0 00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:29.975 1+0 records in 00:14:29.975 1+0 records out 00:14:29.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441488 s, 9.3 MB/s 00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:29.975 17:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:30.234 /dev/nbd1 00:14:30.234 17:59:12 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:30.234 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:30.234 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:30.234 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:30.234 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:30.234 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:30.234 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:30.234 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:30.234 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:30.234 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:30.234 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:30.234 1+0 records in 00:14:30.234 1+0 records out 00:14:30.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465576 s, 8.8 MB/s 00:14:30.234 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.234 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:30.234 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.234 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:30.234 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:30.234 17:59:12 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:30.234 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:30.234 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:30.494 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:30.494 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:30.494 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:30.494 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:30.494 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:30.494 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.494 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:30.753 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:30.753 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:30.753 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:30.753 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.753 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.753 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:30.753 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:30.753 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.753 17:59:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.753 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.012 [2024-11-26 17:59:12.830093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:14:31.012 [2024-11-26 17:59:12.830173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.012 [2024-11-26 17:59:12.830200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:31.012 [2024-11-26 17:59:12.830211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.012 [2024-11-26 17:59:12.832821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.012 [2024-11-26 17:59:12.832871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:31.012 [2024-11-26 17:59:12.832995] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:31.012 [2024-11-26 17:59:12.833080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:31.012 [2024-11-26 17:59:12.833247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:31.012 [2024-11-26 17:59:12.833370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:31.012 spare 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.012 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.272 [2024-11-26 17:59:12.933298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:31.272 [2024-11-26 17:59:12.933368] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:31.272 [2024-11-26 17:59:12.933812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:31.272 [2024-11-26 17:59:12.934082] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:31.272 [2024-11-26 17:59:12.934099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:31.272 [2024-11-26 17:59:12.934349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.272 "name": "raid_bdev1", 00:14:31.272 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:31.272 "strip_size_kb": 0, 00:14:31.272 "state": "online", 00:14:31.272 "raid_level": "raid1", 00:14:31.272 "superblock": true, 00:14:31.272 "num_base_bdevs": 4, 00:14:31.272 "num_base_bdevs_discovered": 3, 00:14:31.272 "num_base_bdevs_operational": 3, 00:14:31.272 "base_bdevs_list": [ 00:14:31.272 { 00:14:31.272 "name": "spare", 00:14:31.272 "uuid": "e7420ced-446b-5380-83aa-d513a4157458", 00:14:31.272 "is_configured": true, 00:14:31.272 "data_offset": 2048, 00:14:31.272 "data_size": 63488 00:14:31.272 }, 00:14:31.272 { 00:14:31.272 "name": null, 00:14:31.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.272 "is_configured": false, 00:14:31.272 "data_offset": 2048, 00:14:31.272 "data_size": 63488 00:14:31.272 }, 00:14:31.272 { 00:14:31.272 "name": "BaseBdev3", 00:14:31.272 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:31.272 "is_configured": true, 00:14:31.272 "data_offset": 2048, 00:14:31.272 "data_size": 63488 00:14:31.272 }, 00:14:31.272 { 00:14:31.272 "name": "BaseBdev4", 00:14:31.272 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:31.272 "is_configured": true, 00:14:31.272 "data_offset": 2048, 00:14:31.272 "data_size": 63488 00:14:31.272 } 00:14:31.272 ] 00:14:31.272 }' 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.272 17:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.839 17:59:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.839 "name": "raid_bdev1", 00:14:31.839 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:31.839 "strip_size_kb": 0, 00:14:31.839 "state": "online", 00:14:31.839 "raid_level": "raid1", 00:14:31.839 "superblock": true, 00:14:31.839 "num_base_bdevs": 4, 00:14:31.839 "num_base_bdevs_discovered": 3, 00:14:31.839 "num_base_bdevs_operational": 3, 00:14:31.839 "base_bdevs_list": [ 00:14:31.839 { 00:14:31.839 "name": "spare", 00:14:31.839 "uuid": "e7420ced-446b-5380-83aa-d513a4157458", 00:14:31.839 "is_configured": true, 00:14:31.839 "data_offset": 2048, 00:14:31.839 "data_size": 63488 00:14:31.839 }, 00:14:31.839 { 00:14:31.839 "name": null, 00:14:31.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.839 "is_configured": false, 00:14:31.839 "data_offset": 2048, 00:14:31.839 "data_size": 63488 00:14:31.839 }, 00:14:31.839 { 00:14:31.839 "name": "BaseBdev3", 00:14:31.839 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:31.839 "is_configured": true, 00:14:31.839 "data_offset": 2048, 00:14:31.839 "data_size": 63488 00:14:31.839 
}, 00:14:31.839 { 00:14:31.839 "name": "BaseBdev4", 00:14:31.839 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:31.839 "is_configured": true, 00:14:31.839 "data_offset": 2048, 00:14:31.839 "data_size": 63488 00:14:31.839 } 00:14:31.839 ] 00:14:31.839 }' 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.839 [2024-11-26 17:59:13.589540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.839 "name": "raid_bdev1", 00:14:31.839 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:31.839 "strip_size_kb": 0, 00:14:31.839 "state": "online", 00:14:31.839 "raid_level": "raid1", 00:14:31.839 "superblock": true, 00:14:31.839 "num_base_bdevs": 4, 00:14:31.839 "num_base_bdevs_discovered": 2, 00:14:31.839 "num_base_bdevs_operational": 
2, 00:14:31.839 "base_bdevs_list": [ 00:14:31.839 { 00:14:31.839 "name": null, 00:14:31.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.839 "is_configured": false, 00:14:31.839 "data_offset": 0, 00:14:31.839 "data_size": 63488 00:14:31.839 }, 00:14:31.839 { 00:14:31.839 "name": null, 00:14:31.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.839 "is_configured": false, 00:14:31.839 "data_offset": 2048, 00:14:31.839 "data_size": 63488 00:14:31.839 }, 00:14:31.839 { 00:14:31.839 "name": "BaseBdev3", 00:14:31.839 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:31.839 "is_configured": true, 00:14:31.839 "data_offset": 2048, 00:14:31.839 "data_size": 63488 00:14:31.839 }, 00:14:31.839 { 00:14:31.839 "name": "BaseBdev4", 00:14:31.839 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:31.839 "is_configured": true, 00:14:31.839 "data_offset": 2048, 00:14:31.839 "data_size": 63488 00:14:31.839 } 00:14:31.839 ] 00:14:31.839 }' 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.839 17:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.406 17:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:32.406 17:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.406 17:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.406 [2024-11-26 17:59:14.089549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.406 [2024-11-26 17:59:14.089805] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:32.406 [2024-11-26 17:59:14.089835] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:32.406 [2024-11-26 17:59:14.089879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.406 [2024-11-26 17:59:14.106988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:32.406 17:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.406 17:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:32.406 [2024-11-26 17:59:14.109295] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:33.345 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.345 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.345 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.345 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.345 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.345 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.345 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.345 17:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.345 17:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.345 17:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.345 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.345 "name": "raid_bdev1", 00:14:33.345 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:33.345 "strip_size_kb": 0, 00:14:33.345 "state": "online", 00:14:33.345 "raid_level": "raid1", 
00:14:33.345 "superblock": true, 00:14:33.345 "num_base_bdevs": 4, 00:14:33.345 "num_base_bdevs_discovered": 3, 00:14:33.345 "num_base_bdevs_operational": 3, 00:14:33.345 "process": { 00:14:33.345 "type": "rebuild", 00:14:33.345 "target": "spare", 00:14:33.345 "progress": { 00:14:33.345 "blocks": 20480, 00:14:33.345 "percent": 32 00:14:33.345 } 00:14:33.345 }, 00:14:33.345 "base_bdevs_list": [ 00:14:33.345 { 00:14:33.345 "name": "spare", 00:14:33.345 "uuid": "e7420ced-446b-5380-83aa-d513a4157458", 00:14:33.345 "is_configured": true, 00:14:33.345 "data_offset": 2048, 00:14:33.345 "data_size": 63488 00:14:33.345 }, 00:14:33.345 { 00:14:33.345 "name": null, 00:14:33.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.345 "is_configured": false, 00:14:33.345 "data_offset": 2048, 00:14:33.345 "data_size": 63488 00:14:33.345 }, 00:14:33.345 { 00:14:33.345 "name": "BaseBdev3", 00:14:33.345 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:33.345 "is_configured": true, 00:14:33.345 "data_offset": 2048, 00:14:33.345 "data_size": 63488 00:14:33.345 }, 00:14:33.345 { 00:14:33.346 "name": "BaseBdev4", 00:14:33.346 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:33.346 "is_configured": true, 00:14:33.346 "data_offset": 2048, 00:14:33.346 "data_size": 63488 00:14:33.346 } 00:14:33.346 ] 00:14:33.346 }' 00:14:33.346 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.605 [2024-11-26 17:59:15.257630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.605 [2024-11-26 17:59:15.315798] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:33.605 [2024-11-26 17:59:15.315895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.605 [2024-11-26 17:59:15.315917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.605 [2024-11-26 17:59:15.315926] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.605 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.605 "name": "raid_bdev1", 00:14:33.605 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:33.605 "strip_size_kb": 0, 00:14:33.605 "state": "online", 00:14:33.605 "raid_level": "raid1", 00:14:33.605 "superblock": true, 00:14:33.605 "num_base_bdevs": 4, 00:14:33.605 "num_base_bdevs_discovered": 2, 00:14:33.605 "num_base_bdevs_operational": 2, 00:14:33.605 "base_bdevs_list": [ 00:14:33.605 { 00:14:33.605 "name": null, 00:14:33.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.605 "is_configured": false, 00:14:33.605 "data_offset": 0, 00:14:33.605 "data_size": 63488 00:14:33.605 }, 00:14:33.605 { 00:14:33.605 "name": null, 00:14:33.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.605 "is_configured": false, 00:14:33.605 "data_offset": 2048, 00:14:33.605 "data_size": 63488 00:14:33.606 }, 00:14:33.606 { 00:14:33.606 "name": "BaseBdev3", 00:14:33.606 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:33.606 "is_configured": true, 00:14:33.606 "data_offset": 2048, 00:14:33.606 "data_size": 63488 00:14:33.606 }, 00:14:33.606 { 00:14:33.606 "name": "BaseBdev4", 00:14:33.606 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:33.606 "is_configured": true, 00:14:33.606 "data_offset": 2048, 00:14:33.606 "data_size": 63488 00:14:33.606 } 00:14:33.606 ] 00:14:33.606 }' 00:14:33.606 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:33.606 17:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.174 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:34.174 17:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.174 17:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.174 [2024-11-26 17:59:15.790224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:34.174 [2024-11-26 17:59:15.790320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.174 [2024-11-26 17:59:15.790361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:34.174 [2024-11-26 17:59:15.790372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.174 [2024-11-26 17:59:15.790951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.174 [2024-11-26 17:59:15.790979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:34.174 [2024-11-26 17:59:15.791111] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:34.174 [2024-11-26 17:59:15.791127] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:34.174 [2024-11-26 17:59:15.791143] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:34.174 [2024-11-26 17:59:15.791170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.174 [2024-11-26 17:59:15.809011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:34.174 spare 00:14:34.174 17:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.174 17:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:34.174 [2024-11-26 17:59:15.811322] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:35.111 17:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.111 17:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.111 17:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.111 17:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.111 17:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.111 17:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.111 17:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.111 17:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.111 17:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.111 17:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.111 17:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.111 "name": "raid_bdev1", 00:14:35.111 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:35.111 "strip_size_kb": 0, 00:14:35.111 "state": "online", 00:14:35.111 
"raid_level": "raid1", 00:14:35.111 "superblock": true, 00:14:35.111 "num_base_bdevs": 4, 00:14:35.111 "num_base_bdevs_discovered": 3, 00:14:35.111 "num_base_bdevs_operational": 3, 00:14:35.111 "process": { 00:14:35.111 "type": "rebuild", 00:14:35.111 "target": "spare", 00:14:35.111 "progress": { 00:14:35.111 "blocks": 20480, 00:14:35.111 "percent": 32 00:14:35.111 } 00:14:35.111 }, 00:14:35.111 "base_bdevs_list": [ 00:14:35.111 { 00:14:35.111 "name": "spare", 00:14:35.111 "uuid": "e7420ced-446b-5380-83aa-d513a4157458", 00:14:35.111 "is_configured": true, 00:14:35.111 "data_offset": 2048, 00:14:35.111 "data_size": 63488 00:14:35.111 }, 00:14:35.111 { 00:14:35.111 "name": null, 00:14:35.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.111 "is_configured": false, 00:14:35.111 "data_offset": 2048, 00:14:35.111 "data_size": 63488 00:14:35.111 }, 00:14:35.111 { 00:14:35.111 "name": "BaseBdev3", 00:14:35.111 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:35.111 "is_configured": true, 00:14:35.111 "data_offset": 2048, 00:14:35.111 "data_size": 63488 00:14:35.111 }, 00:14:35.111 { 00:14:35.111 "name": "BaseBdev4", 00:14:35.111 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:35.111 "is_configured": true, 00:14:35.111 "data_offset": 2048, 00:14:35.111 "data_size": 63488 00:14:35.111 } 00:14:35.111 ] 00:14:35.111 }' 00:14:35.111 17:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.111 17:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.111 17:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.111 17:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.111 17:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:35.111 17:59:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.111 17:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.371 [2024-11-26 17:59:16.978140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.371 [2024-11-26 17:59:17.017974] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:35.371 [2024-11-26 17:59:17.018086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.371 [2024-11-26 17:59:17.018106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.371 [2024-11-26 17:59:17.018117] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:35.371 17:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.371 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:35.371 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.371 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.371 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.371 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.371 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:35.371 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.371 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.371 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.371 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.371 
17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.371 17:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.371 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.371 17:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.371 17:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.371 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.371 "name": "raid_bdev1", 00:14:35.371 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:35.371 "strip_size_kb": 0, 00:14:35.371 "state": "online", 00:14:35.371 "raid_level": "raid1", 00:14:35.371 "superblock": true, 00:14:35.371 "num_base_bdevs": 4, 00:14:35.371 "num_base_bdevs_discovered": 2, 00:14:35.371 "num_base_bdevs_operational": 2, 00:14:35.371 "base_bdevs_list": [ 00:14:35.371 { 00:14:35.371 "name": null, 00:14:35.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.371 "is_configured": false, 00:14:35.371 "data_offset": 0, 00:14:35.371 "data_size": 63488 00:14:35.371 }, 00:14:35.371 { 00:14:35.371 "name": null, 00:14:35.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.371 "is_configured": false, 00:14:35.371 "data_offset": 2048, 00:14:35.371 "data_size": 63488 00:14:35.371 }, 00:14:35.371 { 00:14:35.371 "name": "BaseBdev3", 00:14:35.371 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:35.371 "is_configured": true, 00:14:35.371 "data_offset": 2048, 00:14:35.371 "data_size": 63488 00:14:35.371 }, 00:14:35.371 { 00:14:35.371 "name": "BaseBdev4", 00:14:35.371 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:35.371 "is_configured": true, 00:14:35.371 "data_offset": 2048, 00:14:35.371 "data_size": 63488 00:14:35.371 } 00:14:35.371 ] 00:14:35.371 }' 00:14:35.371 17:59:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.371 17:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.940 "name": "raid_bdev1", 00:14:35.940 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:35.940 "strip_size_kb": 0, 00:14:35.940 "state": "online", 00:14:35.940 "raid_level": "raid1", 00:14:35.940 "superblock": true, 00:14:35.940 "num_base_bdevs": 4, 00:14:35.940 "num_base_bdevs_discovered": 2, 00:14:35.940 "num_base_bdevs_operational": 2, 00:14:35.940 "base_bdevs_list": [ 00:14:35.940 { 00:14:35.940 "name": null, 00:14:35.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.940 "is_configured": false, 00:14:35.940 "data_offset": 0, 00:14:35.940 "data_size": 63488 00:14:35.940 }, 00:14:35.940 
{ 00:14:35.940 "name": null, 00:14:35.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.940 "is_configured": false, 00:14:35.940 "data_offset": 2048, 00:14:35.940 "data_size": 63488 00:14:35.940 }, 00:14:35.940 { 00:14:35.940 "name": "BaseBdev3", 00:14:35.940 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:35.940 "is_configured": true, 00:14:35.940 "data_offset": 2048, 00:14:35.940 "data_size": 63488 00:14:35.940 }, 00:14:35.940 { 00:14:35.940 "name": "BaseBdev4", 00:14:35.940 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:35.940 "is_configured": true, 00:14:35.940 "data_offset": 2048, 00:14:35.940 "data_size": 63488 00:14:35.940 } 00:14:35.940 ] 00:14:35.940 }' 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.940 [2024-11-26 17:59:17.690115] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:35.940 [2024-11-26 17:59:17.690204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.940 [2024-11-26 17:59:17.690231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:35.940 [2024-11-26 17:59:17.690244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.940 [2024-11-26 17:59:17.690813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.940 [2024-11-26 17:59:17.690850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:35.940 [2024-11-26 17:59:17.690957] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:35.940 [2024-11-26 17:59:17.690979] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:35.940 [2024-11-26 17:59:17.690988] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:35.940 [2024-11-26 17:59:17.691029] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:35.940 BaseBdev1 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.940 17:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:36.875 17:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:36.875 17:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.875 17:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.875 17:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.875 17:59:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.875 17:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:36.875 17:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.875 17:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.875 17:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.875 17:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.875 17:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.875 17:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.875 17:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.875 17:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.875 17:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.133 17:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.133 "name": "raid_bdev1", 00:14:37.133 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:37.133 "strip_size_kb": 0, 00:14:37.133 "state": "online", 00:14:37.133 "raid_level": "raid1", 00:14:37.133 "superblock": true, 00:14:37.133 "num_base_bdevs": 4, 00:14:37.133 "num_base_bdevs_discovered": 2, 00:14:37.133 "num_base_bdevs_operational": 2, 00:14:37.133 "base_bdevs_list": [ 00:14:37.133 { 00:14:37.133 "name": null, 00:14:37.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.133 "is_configured": false, 00:14:37.133 "data_offset": 0, 00:14:37.133 "data_size": 63488 00:14:37.133 }, 00:14:37.133 { 00:14:37.133 "name": null, 00:14:37.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.133 
"is_configured": false, 00:14:37.133 "data_offset": 2048, 00:14:37.133 "data_size": 63488 00:14:37.133 }, 00:14:37.133 { 00:14:37.133 "name": "BaseBdev3", 00:14:37.133 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:37.133 "is_configured": true, 00:14:37.133 "data_offset": 2048, 00:14:37.133 "data_size": 63488 00:14:37.133 }, 00:14:37.133 { 00:14:37.133 "name": "BaseBdev4", 00:14:37.133 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:37.133 "is_configured": true, 00:14:37.133 "data_offset": 2048, 00:14:37.133 "data_size": 63488 00:14:37.133 } 00:14:37.133 ] 00:14:37.133 }' 00:14:37.133 17:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.133 17:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.391 17:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.391 17:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.391 17:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.391 17:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.391 17:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.391 17:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.391 17:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.391 17:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.391 17:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.391 17:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.391 17:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:37.391 "name": "raid_bdev1", 00:14:37.391 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:37.391 "strip_size_kb": 0, 00:14:37.391 "state": "online", 00:14:37.391 "raid_level": "raid1", 00:14:37.391 "superblock": true, 00:14:37.391 "num_base_bdevs": 4, 00:14:37.391 "num_base_bdevs_discovered": 2, 00:14:37.391 "num_base_bdevs_operational": 2, 00:14:37.391 "base_bdevs_list": [ 00:14:37.391 { 00:14:37.391 "name": null, 00:14:37.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.391 "is_configured": false, 00:14:37.391 "data_offset": 0, 00:14:37.391 "data_size": 63488 00:14:37.391 }, 00:14:37.391 { 00:14:37.391 "name": null, 00:14:37.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.391 "is_configured": false, 00:14:37.391 "data_offset": 2048, 00:14:37.391 "data_size": 63488 00:14:37.391 }, 00:14:37.391 { 00:14:37.391 "name": "BaseBdev3", 00:14:37.391 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:37.391 "is_configured": true, 00:14:37.391 "data_offset": 2048, 00:14:37.391 "data_size": 63488 00:14:37.391 }, 00:14:37.391 { 00:14:37.391 "name": "BaseBdev4", 00:14:37.391 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:37.391 "is_configured": true, 00:14:37.391 "data_offset": 2048, 00:14:37.391 "data_size": 63488 00:14:37.391 } 00:14:37.391 ] 00:14:37.391 }' 00:14:37.391 17:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.391 17:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.391 17:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.649 17:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.649 17:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:37.649 17:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:37.649 17:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:37.649 17:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:37.649 17:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:37.649 17:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:37.649 17:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:37.649 17:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:37.649 17:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.649 17:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.649 [2024-11-26 17:59:19.308179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.649 [2024-11-26 17:59:19.308436] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:37.649 [2024-11-26 17:59:19.308463] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:37.649 request: 00:14:37.649 { 00:14:37.649 "base_bdev": "BaseBdev1", 00:14:37.649 "raid_bdev": "raid_bdev1", 00:14:37.649 "method": "bdev_raid_add_base_bdev", 00:14:37.649 "req_id": 1 00:14:37.649 } 00:14:37.649 Got JSON-RPC error response 00:14:37.649 response: 00:14:37.649 { 00:14:37.649 "code": -22, 00:14:37.649 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:37.649 } 00:14:37.649 17:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:37.649 17:59:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:37.649 17:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:37.649 17:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:37.649 17:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:37.649 17:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:38.585 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:38.585 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.585 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.585 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.585 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.585 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:38.585 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.585 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.585 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.585 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.585 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.585 17:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.585 17:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.586 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:38.586 17:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.586 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.586 "name": "raid_bdev1", 00:14:38.586 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:38.586 "strip_size_kb": 0, 00:14:38.586 "state": "online", 00:14:38.586 "raid_level": "raid1", 00:14:38.586 "superblock": true, 00:14:38.586 "num_base_bdevs": 4, 00:14:38.586 "num_base_bdevs_discovered": 2, 00:14:38.586 "num_base_bdevs_operational": 2, 00:14:38.586 "base_bdevs_list": [ 00:14:38.586 { 00:14:38.586 "name": null, 00:14:38.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.586 "is_configured": false, 00:14:38.586 "data_offset": 0, 00:14:38.586 "data_size": 63488 00:14:38.586 }, 00:14:38.586 { 00:14:38.586 "name": null, 00:14:38.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.586 "is_configured": false, 00:14:38.586 "data_offset": 2048, 00:14:38.586 "data_size": 63488 00:14:38.586 }, 00:14:38.586 { 00:14:38.586 "name": "BaseBdev3", 00:14:38.586 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:38.586 "is_configured": true, 00:14:38.586 "data_offset": 2048, 00:14:38.586 "data_size": 63488 00:14:38.586 }, 00:14:38.586 { 00:14:38.586 "name": "BaseBdev4", 00:14:38.586 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:38.586 "is_configured": true, 00:14:38.586 "data_offset": 2048, 00:14:38.586 "data_size": 63488 00:14:38.586 } 00:14:38.586 ] 00:14:38.586 }' 00:14:38.586 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.586 17:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.155 17:59:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.155 "name": "raid_bdev1", 00:14:39.155 "uuid": "a020eb87-07d6-4989-973a-f7b763c477c7", 00:14:39.155 "strip_size_kb": 0, 00:14:39.155 "state": "online", 00:14:39.155 "raid_level": "raid1", 00:14:39.155 "superblock": true, 00:14:39.155 "num_base_bdevs": 4, 00:14:39.155 "num_base_bdevs_discovered": 2, 00:14:39.155 "num_base_bdevs_operational": 2, 00:14:39.155 "base_bdevs_list": [ 00:14:39.155 { 00:14:39.155 "name": null, 00:14:39.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.155 "is_configured": false, 00:14:39.155 "data_offset": 0, 00:14:39.155 "data_size": 63488 00:14:39.155 }, 00:14:39.155 { 00:14:39.155 "name": null, 00:14:39.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.155 "is_configured": false, 00:14:39.155 "data_offset": 2048, 00:14:39.155 "data_size": 63488 00:14:39.155 }, 00:14:39.155 { 00:14:39.155 "name": "BaseBdev3", 00:14:39.155 "uuid": "12f56970-d459-5109-bf12-0be04fdc7f1d", 00:14:39.155 "is_configured": true, 00:14:39.155 "data_offset": 2048, 00:14:39.155 "data_size": 63488 00:14:39.155 }, 
00:14:39.155 { 00:14:39.155 "name": "BaseBdev4", 00:14:39.155 "uuid": "97e0db4f-cff9-56ef-91ab-321742a86d92", 00:14:39.155 "is_configured": true, 00:14:39.155 "data_offset": 2048, 00:14:39.155 "data_size": 63488 00:14:39.155 } 00:14:39.155 ] 00:14:39.155 }' 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78351 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78351 ']' 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78351 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78351 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:39.155 killing process with pid 78351 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78351' 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78351 00:14:39.155 Received shutdown signal, test time was about 60.000000 seconds 00:14:39.155 00:14:39.155 Latency(us) 00:14:39.155 
[2024-11-26T17:59:21.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.155 [2024-11-26T17:59:21.018Z] =================================================================================================================== 00:14:39.155 [2024-11-26T17:59:21.018Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:39.155 17:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78351 00:14:39.155 [2024-11-26 17:59:20.993259] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.155 [2024-11-26 17:59:20.993438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.155 [2024-11-26 17:59:20.993535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.155 [2024-11-26 17:59:20.993549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:39.741 [2024-11-26 17:59:21.589210] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:41.119 17:59:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:41.119 00:14:41.119 real 0m26.833s 00:14:41.119 user 0m32.408s 00:14:41.119 sys 0m3.922s 00:14:41.119 17:59:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:41.119 17:59:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.119 ************************************ 00:14:41.119 END TEST raid_rebuild_test_sb 00:14:41.119 ************************************ 00:14:41.119 17:59:22 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:41.119 17:59:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:41.119 17:59:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:41.119 17:59:22 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:14:41.378 ************************************ 00:14:41.378 START TEST raid_rebuild_test_io 00:14:41.378 ************************************ 00:14:41.378 17:59:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:41.378 17:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:41.378 17:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:41.378 17:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:41.378 17:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:41.378 17:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:41.378 17:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79123 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79123 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79123 ']' 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:14:41.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:41.378 17:59:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.378 [2024-11-26 17:59:23.102105] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:14:41.378 [2024-11-26 17:59:23.102247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79123 ] 00:14:41.378 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:41.378 Zero copy mechanism will not be used. 
00:14:41.637 [2024-11-26 17:59:23.281508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.637 [2024-11-26 17:59:23.420338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.895 [2024-11-26 17:59:23.656514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.895 [2024-11-26 17:59:23.656595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:42.155 17:59:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:42.155 17:59:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:42.155 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:42.155 17:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:42.155 17:59:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.155 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.414 BaseBdev1_malloc 00:14:42.414 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.414 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:42.414 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.414 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.414 [2024-11-26 17:59:24.058312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:42.414 [2024-11-26 17:59:24.058391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.414 [2024-11-26 17:59:24.058432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:42.414 [2024-11-26 
17:59:24.058446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.414 [2024-11-26 17:59:24.060971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.414 [2024-11-26 17:59:24.061041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:42.414 BaseBdev1 00:14:42.414 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.414 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:42.414 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:42.414 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.414 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.414 BaseBdev2_malloc 00:14:42.414 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.415 [2024-11-26 17:59:24.119762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:42.415 [2024-11-26 17:59:24.119839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.415 [2024-11-26 17:59:24.119866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:42.415 [2024-11-26 17:59:24.119878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.415 [2024-11-26 17:59:24.122386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:42.415 [2024-11-26 17:59:24.122435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:42.415 BaseBdev2 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.415 BaseBdev3_malloc 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.415 [2024-11-26 17:59:24.194821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:42.415 [2024-11-26 17:59:24.194895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.415 [2024-11-26 17:59:24.194922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:42.415 [2024-11-26 17:59:24.194936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.415 [2024-11-26 17:59:24.197505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.415 [2024-11-26 17:59:24.197554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:42.415 BaseBdev3 00:14:42.415 17:59:24 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.415 BaseBdev4_malloc 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.415 [2024-11-26 17:59:24.256628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:42.415 [2024-11-26 17:59:24.256711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.415 [2024-11-26 17:59:24.256745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:42.415 [2024-11-26 17:59:24.256757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.415 [2024-11-26 17:59:24.259304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.415 [2024-11-26 17:59:24.259353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:42.415 BaseBdev4 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.415 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.674 spare_malloc 00:14:42.674 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.674 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:42.674 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.674 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.674 spare_delay 00:14:42.674 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.674 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:42.674 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.674 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.674 [2024-11-26 17:59:24.330180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:42.674 [2024-11-26 17:59:24.330252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.674 [2024-11-26 17:59:24.330281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:42.674 [2024-11-26 17:59:24.330294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.674 [2024-11-26 17:59:24.332758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.674 [2024-11-26 17:59:24.332804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:42.674 spare 00:14:42.674 17:59:24 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.674 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:42.674 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.674 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.674 [2024-11-26 17:59:24.342219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.674 [2024-11-26 17:59:24.344381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:42.674 [2024-11-26 17:59:24.344462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:42.674 [2024-11-26 17:59:24.344525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:42.674 [2024-11-26 17:59:24.344636] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:42.674 [2024-11-26 17:59:24.344660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:42.674 [2024-11-26 17:59:24.344997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:42.674 [2024-11-26 17:59:24.345249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:42.674 [2024-11-26 17:59:24.345273] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:42.675 [2024-11-26 17:59:24.345509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.675 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.675 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:42.675 17:59:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.675 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.675 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.675 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.675 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.675 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.675 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.675 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.675 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.675 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.675 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.675 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.675 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.675 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.675 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.675 "name": "raid_bdev1", 00:14:42.675 "uuid": "6b1c682e-8e34-4bf9-90f3-328ea20e2351", 00:14:42.675 "strip_size_kb": 0, 00:14:42.675 "state": "online", 00:14:42.675 "raid_level": "raid1", 00:14:42.675 "superblock": false, 00:14:42.675 "num_base_bdevs": 4, 00:14:42.675 "num_base_bdevs_discovered": 4, 00:14:42.675 "num_base_bdevs_operational": 4, 00:14:42.675 "base_bdevs_list": [ 00:14:42.675 
{ 00:14:42.675 "name": "BaseBdev1", 00:14:42.675 "uuid": "2e30f1eb-4cf0-5587-8494-dc5f0bf5c7f9", 00:14:42.675 "is_configured": true, 00:14:42.675 "data_offset": 0, 00:14:42.675 "data_size": 65536 00:14:42.675 }, 00:14:42.675 { 00:14:42.675 "name": "BaseBdev2", 00:14:42.675 "uuid": "6ac6f330-8c29-5cf0-92a7-90b5c0a6a1f9", 00:14:42.675 "is_configured": true, 00:14:42.675 "data_offset": 0, 00:14:42.675 "data_size": 65536 00:14:42.675 }, 00:14:42.675 { 00:14:42.675 "name": "BaseBdev3", 00:14:42.675 "uuid": "e99742e0-387b-5f56-95a6-da2950d4e3b3", 00:14:42.675 "is_configured": true, 00:14:42.675 "data_offset": 0, 00:14:42.675 "data_size": 65536 00:14:42.675 }, 00:14:42.675 { 00:14:42.675 "name": "BaseBdev4", 00:14:42.675 "uuid": "5fff1684-f883-5de7-9d13-16fa59b7a22c", 00:14:42.675 "is_configured": true, 00:14:42.675 "data_offset": 0, 00:14:42.675 "data_size": 65536 00:14:42.675 } 00:14:42.675 ] 00:14:42.675 }' 00:14:42.675 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.675 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.243 [2024-11-26 17:59:24.813913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.243 
17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.243 [2024-11-26 17:59:24.913323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.243 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.243 "name": "raid_bdev1", 00:14:43.243 "uuid": "6b1c682e-8e34-4bf9-90f3-328ea20e2351", 00:14:43.243 "strip_size_kb": 0, 00:14:43.243 "state": "online", 00:14:43.243 "raid_level": "raid1", 00:14:43.243 "superblock": false, 00:14:43.243 "num_base_bdevs": 4, 00:14:43.243 "num_base_bdevs_discovered": 3, 00:14:43.243 "num_base_bdevs_operational": 3, 00:14:43.243 "base_bdevs_list": [ 00:14:43.243 { 00:14:43.243 "name": null, 00:14:43.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.243 "is_configured": false, 00:14:43.243 "data_offset": 0, 00:14:43.243 "data_size": 65536 00:14:43.243 }, 00:14:43.243 { 00:14:43.243 "name": "BaseBdev2", 00:14:43.243 "uuid": "6ac6f330-8c29-5cf0-92a7-90b5c0a6a1f9", 00:14:43.243 "is_configured": true, 00:14:43.243 "data_offset": 0, 00:14:43.243 "data_size": 65536 00:14:43.243 }, 00:14:43.243 { 00:14:43.243 "name": "BaseBdev3", 00:14:43.243 "uuid": 
"e99742e0-387b-5f56-95a6-da2950d4e3b3", 00:14:43.243 "is_configured": true, 00:14:43.243 "data_offset": 0, 00:14:43.243 "data_size": 65536 00:14:43.243 }, 00:14:43.243 { 00:14:43.243 "name": "BaseBdev4", 00:14:43.244 "uuid": "5fff1684-f883-5de7-9d13-16fa59b7a22c", 00:14:43.244 "is_configured": true, 00:14:43.244 "data_offset": 0, 00:14:43.244 "data_size": 65536 00:14:43.244 } 00:14:43.244 ] 00:14:43.244 }' 00:14:43.244 17:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.244 17:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.244 [2024-11-26 17:59:25.022263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:43.244 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:43.244 Zero copy mechanism will not be used. 00:14:43.244 Running I/O for 60 seconds... 00:14:43.813 17:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:43.813 17:59:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.813 17:59:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.813 [2024-11-26 17:59:25.389068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:43.813 17:59:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.813 17:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:43.813 [2024-11-26 17:59:25.462387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:43.813 [2024-11-26 17:59:25.464639] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:43.813 [2024-11-26 17:59:25.589211] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:43.813 
[2024-11-26 17:59:25.590867] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:44.072 [2024-11-26 17:59:25.808755] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:44.072 [2024-11-26 17:59:25.809638] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:44.331 126.00 IOPS, 378.00 MiB/s [2024-11-26T17:59:26.194Z] [2024-11-26 17:59:26.184394] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:44.331 [2024-11-26 17:59:26.185076] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:44.591 [2024-11-26 17:59:26.397988] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:44.591 [2024-11-26 17:59:26.398852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:44.591 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.591 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.591 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.591 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.591 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.591 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.591 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.591 17:59:26 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.591 17:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.849 17:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.849 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.849 "name": "raid_bdev1", 00:14:44.849 "uuid": "6b1c682e-8e34-4bf9-90f3-328ea20e2351", 00:14:44.849 "strip_size_kb": 0, 00:14:44.849 "state": "online", 00:14:44.849 "raid_level": "raid1", 00:14:44.849 "superblock": false, 00:14:44.849 "num_base_bdevs": 4, 00:14:44.849 "num_base_bdevs_discovered": 4, 00:14:44.849 "num_base_bdevs_operational": 4, 00:14:44.849 "process": { 00:14:44.849 "type": "rebuild", 00:14:44.849 "target": "spare", 00:14:44.849 "progress": { 00:14:44.849 "blocks": 10240, 00:14:44.849 "percent": 15 00:14:44.850 } 00:14:44.850 }, 00:14:44.850 "base_bdevs_list": [ 00:14:44.850 { 00:14:44.850 "name": "spare", 00:14:44.850 "uuid": "9263c6f1-a99f-5f82-bccd-609e59dfd6b7", 00:14:44.850 "is_configured": true, 00:14:44.850 "data_offset": 0, 00:14:44.850 "data_size": 65536 00:14:44.850 }, 00:14:44.850 { 00:14:44.850 "name": "BaseBdev2", 00:14:44.850 "uuid": "6ac6f330-8c29-5cf0-92a7-90b5c0a6a1f9", 00:14:44.850 "is_configured": true, 00:14:44.850 "data_offset": 0, 00:14:44.850 "data_size": 65536 00:14:44.850 }, 00:14:44.850 { 00:14:44.850 "name": "BaseBdev3", 00:14:44.850 "uuid": "e99742e0-387b-5f56-95a6-da2950d4e3b3", 00:14:44.850 "is_configured": true, 00:14:44.850 "data_offset": 0, 00:14:44.850 "data_size": 65536 00:14:44.850 }, 00:14:44.850 { 00:14:44.850 "name": "BaseBdev4", 00:14:44.850 "uuid": "5fff1684-f883-5de7-9d13-16fa59b7a22c", 00:14:44.850 "is_configured": true, 00:14:44.850 "data_offset": 0, 00:14:44.850 "data_size": 65536 00:14:44.850 } 00:14:44.850 ] 00:14:44.850 }' 00:14:44.850 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:14:44.850 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:44.850 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.850 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.850 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:44.850 17:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.850 17:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.850 [2024-11-26 17:59:26.576203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:45.108 [2024-11-26 17:59:26.756365] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:45.108 [2024-11-26 17:59:26.770917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.108 [2024-11-26 17:59:26.771009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:45.108 [2024-11-26 17:59:26.771029] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:45.109 [2024-11-26 17:59:26.806694] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.109 "name": "raid_bdev1", 00:14:45.109 "uuid": "6b1c682e-8e34-4bf9-90f3-328ea20e2351", 00:14:45.109 "strip_size_kb": 0, 00:14:45.109 "state": "online", 00:14:45.109 "raid_level": "raid1", 00:14:45.109 "superblock": false, 00:14:45.109 "num_base_bdevs": 4, 00:14:45.109 "num_base_bdevs_discovered": 3, 00:14:45.109 "num_base_bdevs_operational": 3, 00:14:45.109 "base_bdevs_list": [ 00:14:45.109 { 00:14:45.109 "name": null, 00:14:45.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.109 "is_configured": false, 00:14:45.109 "data_offset": 0, 00:14:45.109 "data_size": 65536 00:14:45.109 }, 00:14:45.109 { 00:14:45.109 "name": "BaseBdev2", 
00:14:45.109 "uuid": "6ac6f330-8c29-5cf0-92a7-90b5c0a6a1f9", 00:14:45.109 "is_configured": true, 00:14:45.109 "data_offset": 0, 00:14:45.109 "data_size": 65536 00:14:45.109 }, 00:14:45.109 { 00:14:45.109 "name": "BaseBdev3", 00:14:45.109 "uuid": "e99742e0-387b-5f56-95a6-da2950d4e3b3", 00:14:45.109 "is_configured": true, 00:14:45.109 "data_offset": 0, 00:14:45.109 "data_size": 65536 00:14:45.109 }, 00:14:45.109 { 00:14:45.109 "name": "BaseBdev4", 00:14:45.109 "uuid": "5fff1684-f883-5de7-9d13-16fa59b7a22c", 00:14:45.109 "is_configured": true, 00:14:45.109 "data_offset": 0, 00:14:45.109 "data_size": 65536 00:14:45.109 } 00:14:45.109 ] 00:14:45.109 }' 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.109 17:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.627 121.50 IOPS, 364.50 MiB/s [2024-11-26T17:59:27.490Z] 17:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.627 17:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.627 17:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.628 17:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.628 17:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.628 17:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.628 17:59:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.628 17:59:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.628 17:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.628 17:59:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:45.628 17:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.628 "name": "raid_bdev1", 00:14:45.628 "uuid": "6b1c682e-8e34-4bf9-90f3-328ea20e2351", 00:14:45.628 "strip_size_kb": 0, 00:14:45.628 "state": "online", 00:14:45.628 "raid_level": "raid1", 00:14:45.628 "superblock": false, 00:14:45.628 "num_base_bdevs": 4, 00:14:45.628 "num_base_bdevs_discovered": 3, 00:14:45.628 "num_base_bdevs_operational": 3, 00:14:45.628 "base_bdevs_list": [ 00:14:45.628 { 00:14:45.628 "name": null, 00:14:45.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.628 "is_configured": false, 00:14:45.628 "data_offset": 0, 00:14:45.628 "data_size": 65536 00:14:45.628 }, 00:14:45.628 { 00:14:45.628 "name": "BaseBdev2", 00:14:45.628 "uuid": "6ac6f330-8c29-5cf0-92a7-90b5c0a6a1f9", 00:14:45.628 "is_configured": true, 00:14:45.628 "data_offset": 0, 00:14:45.628 "data_size": 65536 00:14:45.628 }, 00:14:45.628 { 00:14:45.628 "name": "BaseBdev3", 00:14:45.628 "uuid": "e99742e0-387b-5f56-95a6-da2950d4e3b3", 00:14:45.628 "is_configured": true, 00:14:45.628 "data_offset": 0, 00:14:45.628 "data_size": 65536 00:14:45.628 }, 00:14:45.628 { 00:14:45.628 "name": "BaseBdev4", 00:14:45.628 "uuid": "5fff1684-f883-5de7-9d13-16fa59b7a22c", 00:14:45.628 "is_configured": true, 00:14:45.628 "data_offset": 0, 00:14:45.628 "data_size": 65536 00:14:45.628 } 00:14:45.628 ] 00:14:45.628 }' 00:14:45.628 17:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.628 17:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:45.628 17:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.628 17:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:45.628 17:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 
spare 00:14:45.628 17:59:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.628 17:59:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.628 [2024-11-26 17:59:27.467196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.924 17:59:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.924 17:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:45.924 [2024-11-26 17:59:27.553225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:45.924 [2024-11-26 17:59:27.555583] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:45.924 [2024-11-26 17:59:27.666439] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:45.924 [2024-11-26 17:59:27.667131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:46.181 [2024-11-26 17:59:27.786984] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:46.181 [2024-11-26 17:59:27.787850] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:46.439 128.67 IOPS, 386.00 MiB/s [2024-11-26T17:59:28.302Z] [2024-11-26 17:59:28.180185] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:46.698 [2024-11-26 17:59:28.302616] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:46.699 [2024-11-26 17:59:28.302987] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:46.699 17:59:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.699 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.699 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.699 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.699 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.699 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.699 17:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.699 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.699 17:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.699 17:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.957 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.957 "name": "raid_bdev1", 00:14:46.957 "uuid": "6b1c682e-8e34-4bf9-90f3-328ea20e2351", 00:14:46.957 "strip_size_kb": 0, 00:14:46.957 "state": "online", 00:14:46.957 "raid_level": "raid1", 00:14:46.957 "superblock": false, 00:14:46.957 "num_base_bdevs": 4, 00:14:46.957 "num_base_bdevs_discovered": 4, 00:14:46.957 "num_base_bdevs_operational": 4, 00:14:46.957 "process": { 00:14:46.957 "type": "rebuild", 00:14:46.958 "target": "spare", 00:14:46.958 "progress": { 00:14:46.958 "blocks": 12288, 00:14:46.958 "percent": 18 00:14:46.958 } 00:14:46.958 }, 00:14:46.958 "base_bdevs_list": [ 00:14:46.958 { 00:14:46.958 "name": "spare", 00:14:46.958 "uuid": "9263c6f1-a99f-5f82-bccd-609e59dfd6b7", 00:14:46.958 "is_configured": true, 00:14:46.958 "data_offset": 0, 00:14:46.958 "data_size": 65536 
00:14:46.958 }, 00:14:46.958 { 00:14:46.958 "name": "BaseBdev2", 00:14:46.958 "uuid": "6ac6f330-8c29-5cf0-92a7-90b5c0a6a1f9", 00:14:46.958 "is_configured": true, 00:14:46.958 "data_offset": 0, 00:14:46.958 "data_size": 65536 00:14:46.958 }, 00:14:46.958 { 00:14:46.958 "name": "BaseBdev3", 00:14:46.958 "uuid": "e99742e0-387b-5f56-95a6-da2950d4e3b3", 00:14:46.958 "is_configured": true, 00:14:46.958 "data_offset": 0, 00:14:46.958 "data_size": 65536 00:14:46.958 }, 00:14:46.958 { 00:14:46.958 "name": "BaseBdev4", 00:14:46.958 "uuid": "5fff1684-f883-5de7-9d13-16fa59b7a22c", 00:14:46.958 "is_configured": true, 00:14:46.958 "data_offset": 0, 00:14:46.958 "data_size": 65536 00:14:46.958 } 00:14:46.958 ] 00:14:46.958 }' 00:14:46.958 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.958 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.958 [2024-11-26 17:59:28.647332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:46.958 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.958 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.958 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:46.958 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:46.958 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:46.958 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:46.958 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:46.958 17:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:46.958 17:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.958 [2024-11-26 17:59:28.712387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:46.958 [2024-11-26 17:59:28.804709] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:46.958 [2024-11-26 17:59:28.806511] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:46.958 [2024-11-26 17:59:28.806564] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:46.958 17:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.958 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:46.958 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:46.958 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.218 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.218 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.218 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.218 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.218 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.218 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.218 17:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.218 17:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.218 
17:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.218 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.218 "name": "raid_bdev1", 00:14:47.218 "uuid": "6b1c682e-8e34-4bf9-90f3-328ea20e2351", 00:14:47.218 "strip_size_kb": 0, 00:14:47.218 "state": "online", 00:14:47.218 "raid_level": "raid1", 00:14:47.218 "superblock": false, 00:14:47.218 "num_base_bdevs": 4, 00:14:47.218 "num_base_bdevs_discovered": 3, 00:14:47.218 "num_base_bdevs_operational": 3, 00:14:47.218 "process": { 00:14:47.218 "type": "rebuild", 00:14:47.218 "target": "spare", 00:14:47.218 "progress": { 00:14:47.218 "blocks": 16384, 00:14:47.218 "percent": 25 00:14:47.218 } 00:14:47.218 }, 00:14:47.218 "base_bdevs_list": [ 00:14:47.218 { 00:14:47.218 "name": "spare", 00:14:47.218 "uuid": "9263c6f1-a99f-5f82-bccd-609e59dfd6b7", 00:14:47.218 "is_configured": true, 00:14:47.218 "data_offset": 0, 00:14:47.218 "data_size": 65536 00:14:47.218 }, 00:14:47.218 { 00:14:47.218 "name": null, 00:14:47.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.218 "is_configured": false, 00:14:47.218 "data_offset": 0, 00:14:47.218 "data_size": 65536 00:14:47.218 }, 00:14:47.218 { 00:14:47.218 "name": "BaseBdev3", 00:14:47.218 "uuid": "e99742e0-387b-5f56-95a6-da2950d4e3b3", 00:14:47.218 "is_configured": true, 00:14:47.218 "data_offset": 0, 00:14:47.218 "data_size": 65536 00:14:47.218 }, 00:14:47.218 { 00:14:47.218 "name": "BaseBdev4", 00:14:47.218 "uuid": "5fff1684-f883-5de7-9d13-16fa59b7a22c", 00:14:47.218 "is_configured": true, 00:14:47.218 "data_offset": 0, 00:14:47.218 "data_size": 65536 00:14:47.218 } 00:14:47.218 ] 00:14:47.218 }' 00:14:47.218 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.218 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.218 17:59:28 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.218 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.218 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=509 00:14:47.218 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:47.218 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.218 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.219 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.219 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.219 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.219 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.219 17:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.219 17:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.219 17:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.219 17:59:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.219 17:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.219 "name": "raid_bdev1", 00:14:47.219 "uuid": "6b1c682e-8e34-4bf9-90f3-328ea20e2351", 00:14:47.219 "strip_size_kb": 0, 00:14:47.219 "state": "online", 00:14:47.219 "raid_level": "raid1", 00:14:47.219 "superblock": false, 00:14:47.219 "num_base_bdevs": 4, 00:14:47.219 "num_base_bdevs_discovered": 3, 00:14:47.219 "num_base_bdevs_operational": 3, 00:14:47.219 "process": { 00:14:47.219 "type": 
"rebuild", 00:14:47.219 "target": "spare", 00:14:47.219 "progress": { 00:14:47.219 "blocks": 18432, 00:14:47.219 "percent": 28 00:14:47.219 } 00:14:47.219 }, 00:14:47.219 "base_bdevs_list": [ 00:14:47.219 { 00:14:47.219 "name": "spare", 00:14:47.219 "uuid": "9263c6f1-a99f-5f82-bccd-609e59dfd6b7", 00:14:47.219 "is_configured": true, 00:14:47.219 "data_offset": 0, 00:14:47.219 "data_size": 65536 00:14:47.219 }, 00:14:47.219 { 00:14:47.219 "name": null, 00:14:47.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.219 "is_configured": false, 00:14:47.219 "data_offset": 0, 00:14:47.219 "data_size": 65536 00:14:47.219 }, 00:14:47.219 { 00:14:47.219 "name": "BaseBdev3", 00:14:47.219 "uuid": "e99742e0-387b-5f56-95a6-da2950d4e3b3", 00:14:47.219 "is_configured": true, 00:14:47.219 "data_offset": 0, 00:14:47.219 "data_size": 65536 00:14:47.219 }, 00:14:47.219 { 00:14:47.219 "name": "BaseBdev4", 00:14:47.219 "uuid": "5fff1684-f883-5de7-9d13-16fa59b7a22c", 00:14:47.219 "is_configured": true, 00:14:47.219 "data_offset": 0, 00:14:47.219 "data_size": 65536 00:14:47.219 } 00:14:47.219 ] 00:14:47.219 }' 00:14:47.219 17:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.219 115.50 IOPS, 346.50 MiB/s [2024-11-26T17:59:29.082Z] [2024-11-26 17:59:29.059457] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:47.492 17:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.492 17:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.492 17:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.492 17:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:47.492 [2024-11-26 17:59:29.271187] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
22528 offset_begin: 18432 offset_end: 24576 00:14:47.764 [2024-11-26 17:59:29.615882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:48.023 [2024-11-26 17:59:29.827251] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:48.023 [2024-11-26 17:59:29.827860] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:48.283 99.80 IOPS, 299.40 MiB/s [2024-11-26T17:59:30.146Z] 17:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:48.283 17:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.283 17:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.283 17:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.283 17:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.283 17:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.283 17:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.283 17:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.283 17:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.283 17:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.542 17:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.542 [2024-11-26 17:59:30.153461] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:48.542 17:59:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.542 "name": "raid_bdev1", 00:14:48.542 "uuid": "6b1c682e-8e34-4bf9-90f3-328ea20e2351", 00:14:48.542 "strip_size_kb": 0, 00:14:48.542 "state": "online", 00:14:48.542 "raid_level": "raid1", 00:14:48.542 "superblock": false, 00:14:48.542 "num_base_bdevs": 4, 00:14:48.542 "num_base_bdevs_discovered": 3, 00:14:48.542 "num_base_bdevs_operational": 3, 00:14:48.542 "process": { 00:14:48.542 "type": "rebuild", 00:14:48.542 "target": "spare", 00:14:48.542 "progress": { 00:14:48.542 "blocks": 30720, 00:14:48.542 "percent": 46 00:14:48.542 } 00:14:48.542 }, 00:14:48.542 "base_bdevs_list": [ 00:14:48.542 { 00:14:48.542 "name": "spare", 00:14:48.542 "uuid": "9263c6f1-a99f-5f82-bccd-609e59dfd6b7", 00:14:48.542 "is_configured": true, 00:14:48.542 "data_offset": 0, 00:14:48.542 "data_size": 65536 00:14:48.542 }, 00:14:48.542 { 00:14:48.542 "name": null, 00:14:48.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.542 "is_configured": false, 00:14:48.542 "data_offset": 0, 00:14:48.542 "data_size": 65536 00:14:48.542 }, 00:14:48.542 { 00:14:48.542 "name": "BaseBdev3", 00:14:48.542 "uuid": "e99742e0-387b-5f56-95a6-da2950d4e3b3", 00:14:48.542 "is_configured": true, 00:14:48.542 "data_offset": 0, 00:14:48.542 "data_size": 65536 00:14:48.542 }, 00:14:48.542 { 00:14:48.542 "name": "BaseBdev4", 00:14:48.542 "uuid": "5fff1684-f883-5de7-9d13-16fa59b7a22c", 00:14:48.542 "is_configured": true, 00:14:48.542 "data_offset": 0, 00:14:48.542 "data_size": 65536 00:14:48.542 } 00:14:48.542 ] 00:14:48.542 }' 00:14:48.542 17:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.542 17:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.542 17:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.542 17:59:30 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.542 17:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:48.801 [2024-11-26 17:59:30.518571] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:48.801 [2024-11-26 17:59:30.645827] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:48.801 [2024-11-26 17:59:30.646806] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:49.368 [2024-11-26 17:59:30.986056] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:49.626 94.00 IOPS, 282.00 MiB/s [2024-11-26T17:59:31.489Z] 17:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.626 17:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.626 17:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.627 17:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.627 17:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.627 17:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.627 17:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.627 17:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.627 17:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.627 17:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.627 17:59:31 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.627 17:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.627 "name": "raid_bdev1", 00:14:49.627 "uuid": "6b1c682e-8e34-4bf9-90f3-328ea20e2351", 00:14:49.627 "strip_size_kb": 0, 00:14:49.627 "state": "online", 00:14:49.627 "raid_level": "raid1", 00:14:49.627 "superblock": false, 00:14:49.627 "num_base_bdevs": 4, 00:14:49.627 "num_base_bdevs_discovered": 3, 00:14:49.627 "num_base_bdevs_operational": 3, 00:14:49.627 "process": { 00:14:49.627 "type": "rebuild", 00:14:49.627 "target": "spare", 00:14:49.627 "progress": { 00:14:49.627 "blocks": 47104, 00:14:49.627 "percent": 71 00:14:49.627 } 00:14:49.627 }, 00:14:49.627 "base_bdevs_list": [ 00:14:49.627 { 00:14:49.627 "name": "spare", 00:14:49.627 "uuid": "9263c6f1-a99f-5f82-bccd-609e59dfd6b7", 00:14:49.627 "is_configured": true, 00:14:49.627 "data_offset": 0, 00:14:49.627 "data_size": 65536 00:14:49.627 }, 00:14:49.627 { 00:14:49.627 "name": null, 00:14:49.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.627 "is_configured": false, 00:14:49.627 "data_offset": 0, 00:14:49.627 "data_size": 65536 00:14:49.627 }, 00:14:49.627 { 00:14:49.627 "name": "BaseBdev3", 00:14:49.627 "uuid": "e99742e0-387b-5f56-95a6-da2950d4e3b3", 00:14:49.627 "is_configured": true, 00:14:49.627 "data_offset": 0, 00:14:49.627 "data_size": 65536 00:14:49.627 }, 00:14:49.627 { 00:14:49.627 "name": "BaseBdev4", 00:14:49.627 "uuid": "5fff1684-f883-5de7-9d13-16fa59b7a22c", 00:14:49.627 "is_configured": true, 00:14:49.627 "data_offset": 0, 00:14:49.627 "data_size": 65536 00:14:49.627 } 00:14:49.627 ] 00:14:49.627 }' 00:14:49.627 17:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.627 17:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.627 17:59:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.627 17:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.627 17:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:50.454 84.71 IOPS, 254.14 MiB/s [2024-11-26T17:59:32.317Z] [2024-11-26 17:59:32.221113] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:50.712 [2024-11-26 17:59:32.327232] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:50.712 [2024-11-26 17:59:32.332268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.712 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.712 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.712 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.712 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.712 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.712 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.712 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.712 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.712 17:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.712 17:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.712 17:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.712 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:50.712 "name": "raid_bdev1", 00:14:50.712 "uuid": "6b1c682e-8e34-4bf9-90f3-328ea20e2351", 00:14:50.712 "strip_size_kb": 0, 00:14:50.712 "state": "online", 00:14:50.712 "raid_level": "raid1", 00:14:50.712 "superblock": false, 00:14:50.712 "num_base_bdevs": 4, 00:14:50.712 "num_base_bdevs_discovered": 3, 00:14:50.712 "num_base_bdevs_operational": 3, 00:14:50.712 "base_bdevs_list": [ 00:14:50.712 { 00:14:50.712 "name": "spare", 00:14:50.712 "uuid": "9263c6f1-a99f-5f82-bccd-609e59dfd6b7", 00:14:50.712 "is_configured": true, 00:14:50.712 "data_offset": 0, 00:14:50.712 "data_size": 65536 00:14:50.712 }, 00:14:50.712 { 00:14:50.712 "name": null, 00:14:50.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.712 "is_configured": false, 00:14:50.712 "data_offset": 0, 00:14:50.712 "data_size": 65536 00:14:50.712 }, 00:14:50.712 { 00:14:50.712 "name": "BaseBdev3", 00:14:50.712 "uuid": "e99742e0-387b-5f56-95a6-da2950d4e3b3", 00:14:50.712 "is_configured": true, 00:14:50.712 "data_offset": 0, 00:14:50.712 "data_size": 65536 00:14:50.712 }, 00:14:50.712 { 00:14:50.713 "name": "BaseBdev4", 00:14:50.713 "uuid": "5fff1684-f883-5de7-9d13-16fa59b7a22c", 00:14:50.713 "is_configured": true, 00:14:50.713 "data_offset": 0, 00:14:50.713 "data_size": 65536 00:14:50.713 } 00:14:50.713 ] 00:14:50.713 }' 00:14:50.713 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.713 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:50.713 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.713 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:50.713 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:50.713 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 
00:14:50.713 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.713 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:50.713 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:50.713 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.713 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.713 17:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.713 17:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.713 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.972 "name": "raid_bdev1", 00:14:50.972 "uuid": "6b1c682e-8e34-4bf9-90f3-328ea20e2351", 00:14:50.972 "strip_size_kb": 0, 00:14:50.972 "state": "online", 00:14:50.972 "raid_level": "raid1", 00:14:50.972 "superblock": false, 00:14:50.972 "num_base_bdevs": 4, 00:14:50.972 "num_base_bdevs_discovered": 3, 00:14:50.972 "num_base_bdevs_operational": 3, 00:14:50.972 "base_bdevs_list": [ 00:14:50.972 { 00:14:50.972 "name": "spare", 00:14:50.972 "uuid": "9263c6f1-a99f-5f82-bccd-609e59dfd6b7", 00:14:50.972 "is_configured": true, 00:14:50.972 "data_offset": 0, 00:14:50.972 "data_size": 65536 00:14:50.972 }, 00:14:50.972 { 00:14:50.972 "name": null, 00:14:50.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.972 "is_configured": false, 00:14:50.972 "data_offset": 0, 00:14:50.972 "data_size": 65536 00:14:50.972 }, 00:14:50.972 { 00:14:50.972 "name": "BaseBdev3", 00:14:50.972 "uuid": 
"e99742e0-387b-5f56-95a6-da2950d4e3b3", 00:14:50.972 "is_configured": true, 00:14:50.972 "data_offset": 0, 00:14:50.972 "data_size": 65536 00:14:50.972 }, 00:14:50.972 { 00:14:50.972 "name": "BaseBdev4", 00:14:50.972 "uuid": "5fff1684-f883-5de7-9d13-16fa59b7a22c", 00:14:50.972 "is_configured": true, 00:14:50.972 "data_offset": 0, 00:14:50.972 "data_size": 65536 00:14:50.972 } 00:14:50.972 ] 00:14:50.972 }' 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.972 17:59:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.972 17:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.973 17:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.973 "name": "raid_bdev1", 00:14:50.973 "uuid": "6b1c682e-8e34-4bf9-90f3-328ea20e2351", 00:14:50.973 "strip_size_kb": 0, 00:14:50.973 "state": "online", 00:14:50.973 "raid_level": "raid1", 00:14:50.973 "superblock": false, 00:14:50.973 "num_base_bdevs": 4, 00:14:50.973 "num_base_bdevs_discovered": 3, 00:14:50.973 "num_base_bdevs_operational": 3, 00:14:50.973 "base_bdevs_list": [ 00:14:50.973 { 00:14:50.973 "name": "spare", 00:14:50.973 "uuid": "9263c6f1-a99f-5f82-bccd-609e59dfd6b7", 00:14:50.973 "is_configured": true, 00:14:50.973 "data_offset": 0, 00:14:50.973 "data_size": 65536 00:14:50.973 }, 00:14:50.973 { 00:14:50.973 "name": null, 00:14:50.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.973 "is_configured": false, 00:14:50.973 "data_offset": 0, 00:14:50.973 "data_size": 65536 00:14:50.973 }, 00:14:50.973 { 00:14:50.973 "name": "BaseBdev3", 00:14:50.973 "uuid": "e99742e0-387b-5f56-95a6-da2950d4e3b3", 00:14:50.973 "is_configured": true, 00:14:50.973 "data_offset": 0, 00:14:50.973 "data_size": 65536 00:14:50.973 }, 00:14:50.973 { 00:14:50.973 "name": "BaseBdev4", 00:14:50.973 "uuid": "5fff1684-f883-5de7-9d13-16fa59b7a22c", 00:14:50.973 "is_configured": true, 00:14:50.973 "data_offset": 0, 00:14:50.973 "data_size": 65536 00:14:50.973 } 00:14:50.973 ] 00:14:50.973 }' 00:14:50.973 17:59:32 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.973 17:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.490 80.12 IOPS, 240.38 MiB/s [2024-11-26T17:59:33.353Z] 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.490 [2024-11-26 17:59:33.148575] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:51.490 [2024-11-26 17:59:33.148628] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:51.490 00:14:51.490 Latency(us) 00:14:51.490 [2024-11-26T17:59:33.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.490 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:51.490 raid_bdev1 : 8.22 78.59 235.78 0.00 0.00 17302.93 352.36 121799.66 00:14:51.490 [2024-11-26T17:59:33.353Z] =================================================================================================================== 00:14:51.490 [2024-11-26T17:59:33.353Z] Total : 78.59 235.78 0.00 0.00 17302.93 352.36 121799.66 00:14:51.490 [2024-11-26 17:59:33.252151] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.490 [2024-11-26 17:59:33.252245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.490 [2024-11-26 17:59:33.252363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.490 [2024-11-26 17:59:33.252376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:51.490 { 00:14:51.490 "results": [ 00:14:51.490 { 00:14:51.490 "job": "raid_bdev1", 00:14:51.490 
"core_mask": "0x1", 00:14:51.490 "workload": "randrw", 00:14:51.490 "percentage": 50, 00:14:51.490 "status": "finished", 00:14:51.490 "queue_depth": 2, 00:14:51.490 "io_size": 3145728, 00:14:51.490 "runtime": 8.21941, 00:14:51.490 "iops": 78.59444899329758, 00:14:51.490 "mibps": 235.7833469798927, 00:14:51.490 "io_failed": 0, 00:14:51.490 "io_timeout": 0, 00:14:51.490 "avg_latency_us": 17302.933643381508, 00:14:51.490 "min_latency_us": 352.3633187772926, 00:14:51.490 "max_latency_us": 121799.6576419214 00:14:51.490 } 00:14:51.490 ], 00:14:51.490 "core_count": 1 00:14:51.490 } 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:51.490 17:59:33 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:51.490 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:51.752 /dev/nbd0 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:51.752 1+0 records in 00:14:51.752 1+0 records out 00:14:51.752 4096 bytes (4.1 kB, 4.0 KiB) 
copied, 0.000357911 s, 11.4 MB/s 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- 
# local nbd_list 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:51.752 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:52.012 /dev/nbd1 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:52.012 1+0 records in 00:14:52.012 1+0 records out 00:14:52.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037624 s, 10.9 MB/s 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:52.012 17:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:52.271 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:52.271 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:52.271 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:52.271 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:52.271 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:52.271 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:52.271 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:52.529 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:52.529 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:52.529 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:52.529 17:59:34 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:52.529 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:52.529 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:52.529 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:52.529 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:52.529 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:52.530 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:52.530 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:52.530 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:52.530 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:52.530 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:52.530 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:52.530 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:52.530 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:52.530 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:52.530 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:52.530 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:52.789 /dev/nbd1 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:52.789 1+0 records in 00:14:52.789 1+0 records out 00:14:52.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618602 s, 6.6 MB/s 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:52.789 17:59:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:52.789 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:53.048 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:53.048 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:53.048 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:53.048 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.048 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.048 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:53.048 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:53.049 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.049 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:53.049 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:53.049 17:59:34 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:53.049 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:53.049 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:53.049 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:53.049 17:59:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79123 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79123 ']' 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79123 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.308 
17:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79123 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:53.308 killing process with pid 79123 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79123' 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79123 00:14:53.308 Received shutdown signal, test time was about 10.126686 seconds 00:14:53.308 00:14:53.308 Latency(us) 00:14:53.308 [2024-11-26T17:59:35.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.308 [2024-11-26T17:59:35.171Z] =================================================================================================================== 00:14:53.308 [2024-11-26T17:59:35.171Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:53.308 [2024-11-26 17:59:35.132013] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:53.308 17:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79123 00:14:53.877 [2024-11-26 17:59:35.620702] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:55.252 17:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:55.252 00:14:55.252 real 0m13.946s 00:14:55.252 user 0m17.602s 00:14:55.252 sys 0m1.917s 00:14:55.252 17:59:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.253 17:59:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.253 ************************************ 00:14:55.253 END TEST raid_rebuild_test_io 00:14:55.253 ************************************ 00:14:55.253 17:59:36 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test 
raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:55.253 17:59:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:55.253 17:59:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.253 17:59:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:55.253 ************************************ 00:14:55.253 START TEST raid_rebuild_test_sb_io 00:14:55.253 ************************************ 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:55.253 17:59:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79536 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # 
waitforlisten 79536 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79536 ']' 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:55.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:55.253 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.253 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:55.253 Zero copy mechanism will not be used. 00:14:55.253 [2024-11-26 17:59:37.111899] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:14:55.253 [2024-11-26 17:59:37.112036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79536 ] 00:14:55.512 [2024-11-26 17:59:37.288451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.774 [2024-11-26 17:59:37.418755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.033 [2024-11-26 17:59:37.640990] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.033 [2024-11-26 17:59:37.641072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.292 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.292 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:56.292 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:56.292 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:56.292 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.292 17:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.292 BaseBdev1_malloc 00:14:56.292 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.292 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:56.292 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.292 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.292 [2024-11-26 17:59:38.028889] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:56.292 [2024-11-26 17:59:38.028956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.292 [2024-11-26 17:59:38.028981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:56.292 [2024-11-26 17:59:38.028994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.292 [2024-11-26 17:59:38.031418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.292 [2024-11-26 17:59:38.031457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:56.293 BaseBdev1 00:14:56.293 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.293 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:56.293 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:56.293 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.293 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.293 BaseBdev2_malloc 00:14:56.293 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.293 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:56.293 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.293 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.293 [2024-11-26 17:59:38.087260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:56.293 [2024-11-26 17:59:38.087330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:56.293 [2024-11-26 17:59:38.087357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:56.293 [2024-11-26 17:59:38.087370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.293 [2024-11-26 17:59:38.089735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.293 [2024-11-26 17:59:38.089773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:56.293 BaseBdev2 00:14:56.293 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.293 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:56.293 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:56.293 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.293 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.553 BaseBdev3_malloc 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.553 [2024-11-26 17:59:38.163995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:56.553 [2024-11-26 17:59:38.164072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.553 [2024-11-26 17:59:38.164100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:56.553 
[2024-11-26 17:59:38.164114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.553 [2024-11-26 17:59:38.166754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.553 [2024-11-26 17:59:38.166797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:56.553 BaseBdev3 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.553 BaseBdev4_malloc 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.553 [2024-11-26 17:59:38.224830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:56.553 [2024-11-26 17:59:38.224896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.553 [2024-11-26 17:59:38.224920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:56.553 [2024-11-26 17:59:38.224932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.553 [2024-11-26 17:59:38.227421] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.553 [2024-11-26 17:59:38.227463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:56.553 BaseBdev4 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.553 spare_malloc 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.553 spare_delay 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.553 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.553 [2024-11-26 17:59:38.294852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:56.553 [2024-11-26 17:59:38.294907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.553 [2024-11-26 17:59:38.294929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:14:56.553 [2024-11-26 17:59:38.294940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.554 [2024-11-26 17:59:38.297299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.554 [2024-11-26 17:59:38.297333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:56.554 spare 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.554 [2024-11-26 17:59:38.306876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.554 [2024-11-26 17:59:38.308870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.554 [2024-11-26 17:59:38.308958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:56.554 [2024-11-26 17:59:38.309018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:56.554 [2024-11-26 17:59:38.309238] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:56.554 [2024-11-26 17:59:38.309263] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:56.554 [2024-11-26 17:59:38.309568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:56.554 [2024-11-26 17:59:38.309783] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:56.554 [2024-11-26 17:59:38.309803] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:56.554 [2024-11-26 17:59:38.309984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.554 "name": "raid_bdev1", 00:14:56.554 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:14:56.554 "strip_size_kb": 0, 00:14:56.554 "state": "online", 00:14:56.554 "raid_level": "raid1", 00:14:56.554 "superblock": true, 00:14:56.554 "num_base_bdevs": 4, 00:14:56.554 "num_base_bdevs_discovered": 4, 00:14:56.554 "num_base_bdevs_operational": 4, 00:14:56.554 "base_bdevs_list": [ 00:14:56.554 { 00:14:56.554 "name": "BaseBdev1", 00:14:56.554 "uuid": "d957efa6-c13e-5b10-ace6-49bef2a2b6a6", 00:14:56.554 "is_configured": true, 00:14:56.554 "data_offset": 2048, 00:14:56.554 "data_size": 63488 00:14:56.554 }, 00:14:56.554 { 00:14:56.554 "name": "BaseBdev2", 00:14:56.554 "uuid": "9ace477c-b6e7-50ff-9917-437221daa1b7", 00:14:56.554 "is_configured": true, 00:14:56.554 "data_offset": 2048, 00:14:56.554 "data_size": 63488 00:14:56.554 }, 00:14:56.554 { 00:14:56.554 "name": "BaseBdev3", 00:14:56.554 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:14:56.554 "is_configured": true, 00:14:56.554 "data_offset": 2048, 00:14:56.554 "data_size": 63488 00:14:56.554 }, 00:14:56.554 { 00:14:56.554 "name": "BaseBdev4", 00:14:56.554 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:14:56.554 "is_configured": true, 00:14:56.554 "data_offset": 2048, 00:14:56.554 "data_size": 63488 00:14:56.554 } 00:14:56.554 ] 00:14:56.554 }' 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.554 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.122 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:57.122 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.123 [2024-11-26 17:59:38.770510] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.123 [2024-11-26 17:59:38.853935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.123 17:59:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.123 "name": "raid_bdev1", 00:14:57.123 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:14:57.123 "strip_size_kb": 0, 00:14:57.123 "state": "online", 00:14:57.123 "raid_level": "raid1", 00:14:57.123 
"superblock": true, 00:14:57.123 "num_base_bdevs": 4, 00:14:57.123 "num_base_bdevs_discovered": 3, 00:14:57.123 "num_base_bdevs_operational": 3, 00:14:57.123 "base_bdevs_list": [ 00:14:57.123 { 00:14:57.123 "name": null, 00:14:57.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.123 "is_configured": false, 00:14:57.123 "data_offset": 0, 00:14:57.123 "data_size": 63488 00:14:57.123 }, 00:14:57.123 { 00:14:57.123 "name": "BaseBdev2", 00:14:57.123 "uuid": "9ace477c-b6e7-50ff-9917-437221daa1b7", 00:14:57.123 "is_configured": true, 00:14:57.123 "data_offset": 2048, 00:14:57.123 "data_size": 63488 00:14:57.123 }, 00:14:57.123 { 00:14:57.123 "name": "BaseBdev3", 00:14:57.123 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:14:57.123 "is_configured": true, 00:14:57.123 "data_offset": 2048, 00:14:57.123 "data_size": 63488 00:14:57.123 }, 00:14:57.123 { 00:14:57.123 "name": "BaseBdev4", 00:14:57.123 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:14:57.123 "is_configured": true, 00:14:57.123 "data_offset": 2048, 00:14:57.123 "data_size": 63488 00:14:57.123 } 00:14:57.123 ] 00:14:57.123 }' 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.123 17:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.123 [2024-11-26 17:59:38.962749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:57.123 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:57.123 Zero copy mechanism will not be used. 00:14:57.123 Running I/O for 60 seconds... 
00:14:57.689 17:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:57.689 17:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.689 17:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.689 [2024-11-26 17:59:39.338563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:57.689 17:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.689 17:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:57.689 [2024-11-26 17:59:39.408549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:57.689 [2024-11-26 17:59:39.410762] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:57.689 [2024-11-26 17:59:39.543635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:57.689 [2024-11-26 17:59:39.544290] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:57.958 [2024-11-26 17:59:39.657982] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:58.224 202.00 IOPS, 606.00 MiB/s [2024-11-26T17:59:40.087Z] [2024-11-26 17:59:40.042962] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:58.791 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.791 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.791 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:58.791 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.791 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.791 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.791 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.791 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.791 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.791 [2024-11-26 17:59:40.392755] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:58.791 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.791 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.791 "name": "raid_bdev1", 00:14:58.791 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:14:58.791 "strip_size_kb": 0, 00:14:58.791 "state": "online", 00:14:58.791 "raid_level": "raid1", 00:14:58.791 "superblock": true, 00:14:58.791 "num_base_bdevs": 4, 00:14:58.791 "num_base_bdevs_discovered": 4, 00:14:58.791 "num_base_bdevs_operational": 4, 00:14:58.791 "process": { 00:14:58.791 "type": "rebuild", 00:14:58.791 "target": "spare", 00:14:58.791 "progress": { 00:14:58.791 "blocks": 16384, 00:14:58.791 "percent": 25 00:14:58.791 } 00:14:58.791 }, 00:14:58.791 "base_bdevs_list": [ 00:14:58.791 { 00:14:58.791 "name": "spare", 00:14:58.791 "uuid": "8dcf881e-8927-5266-88e2-8526e5e52342", 00:14:58.791 "is_configured": true, 00:14:58.791 "data_offset": 2048, 00:14:58.791 "data_size": 63488 00:14:58.791 }, 00:14:58.791 { 00:14:58.791 "name": "BaseBdev2", 00:14:58.791 "uuid": "9ace477c-b6e7-50ff-9917-437221daa1b7", 00:14:58.791 
"is_configured": true, 00:14:58.791 "data_offset": 2048, 00:14:58.791 "data_size": 63488 00:14:58.791 }, 00:14:58.791 { 00:14:58.791 "name": "BaseBdev3", 00:14:58.791 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:14:58.791 "is_configured": true, 00:14:58.791 "data_offset": 2048, 00:14:58.791 "data_size": 63488 00:14:58.791 }, 00:14:58.791 { 00:14:58.791 "name": "BaseBdev4", 00:14:58.791 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:14:58.791 "is_configured": true, 00:14:58.791 "data_offset": 2048, 00:14:58.791 "data_size": 63488 00:14:58.791 } 00:14:58.791 ] 00:14:58.791 }' 00:14:58.791 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.791 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.791 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.791 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.791 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:58.791 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.791 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.791 [2024-11-26 17:59:40.537226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:59.050 [2024-11-26 17:59:40.756712] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:59.050 [2024-11-26 17:59:40.772614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.050 [2024-11-26 17:59:40.772735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:59.050 [2024-11-26 17:59:40.772755] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:14:59.050 [2024-11-26 17:59:40.816864] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:59.050 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.050 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:59.050 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.050 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.050 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.050 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.050 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.050 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.050 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.050 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.050 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.050 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.050 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.050 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.050 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.050 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.050 17:59:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.050 "name": "raid_bdev1", 00:14:59.050 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:14:59.050 "strip_size_kb": 0, 00:14:59.050 "state": "online", 00:14:59.050 "raid_level": "raid1", 00:14:59.050 "superblock": true, 00:14:59.050 "num_base_bdevs": 4, 00:14:59.050 "num_base_bdevs_discovered": 3, 00:14:59.050 "num_base_bdevs_operational": 3, 00:14:59.050 "base_bdevs_list": [ 00:14:59.050 { 00:14:59.050 "name": null, 00:14:59.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.050 "is_configured": false, 00:14:59.050 "data_offset": 0, 00:14:59.050 "data_size": 63488 00:14:59.050 }, 00:14:59.050 { 00:14:59.050 "name": "BaseBdev2", 00:14:59.050 "uuid": "9ace477c-b6e7-50ff-9917-437221daa1b7", 00:14:59.050 "is_configured": true, 00:14:59.050 "data_offset": 2048, 00:14:59.050 "data_size": 63488 00:14:59.050 }, 00:14:59.050 { 00:14:59.050 "name": "BaseBdev3", 00:14:59.050 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:14:59.050 "is_configured": true, 00:14:59.050 "data_offset": 2048, 00:14:59.050 "data_size": 63488 00:14:59.050 }, 00:14:59.050 { 00:14:59.050 "name": "BaseBdev4", 00:14:59.050 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:14:59.050 "is_configured": true, 00:14:59.050 "data_offset": 2048, 00:14:59.050 "data_size": 63488 00:14:59.050 } 00:14:59.050 ] 00:14:59.050 }' 00:14:59.050 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.050 17:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.566 142.00 IOPS, 426.00 MiB/s [2024-11-26T17:59:41.429Z] 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:59.566 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.566 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:14:59.566 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:59.566 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.566 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.566 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.566 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.566 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.566 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.566 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.566 "name": "raid_bdev1", 00:14:59.566 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:14:59.566 "strip_size_kb": 0, 00:14:59.566 "state": "online", 00:14:59.566 "raid_level": "raid1", 00:14:59.566 "superblock": true, 00:14:59.566 "num_base_bdevs": 4, 00:14:59.566 "num_base_bdevs_discovered": 3, 00:14:59.566 "num_base_bdevs_operational": 3, 00:14:59.566 "base_bdevs_list": [ 00:14:59.566 { 00:14:59.566 "name": null, 00:14:59.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.566 "is_configured": false, 00:14:59.566 "data_offset": 0, 00:14:59.566 "data_size": 63488 00:14:59.566 }, 00:14:59.566 { 00:14:59.566 "name": "BaseBdev2", 00:14:59.566 "uuid": "9ace477c-b6e7-50ff-9917-437221daa1b7", 00:14:59.566 "is_configured": true, 00:14:59.566 "data_offset": 2048, 00:14:59.566 "data_size": 63488 00:14:59.566 }, 00:14:59.566 { 00:14:59.566 "name": "BaseBdev3", 00:14:59.566 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:14:59.566 "is_configured": true, 00:14:59.566 "data_offset": 2048, 00:14:59.566 "data_size": 63488 00:14:59.566 }, 00:14:59.566 { 00:14:59.566 "name": 
"BaseBdev4", 00:14:59.566 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:14:59.566 "is_configured": true, 00:14:59.566 "data_offset": 2048, 00:14:59.566 "data_size": 63488 00:14:59.566 } 00:14:59.566 ] 00:14:59.566 }' 00:14:59.566 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.566 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:59.566 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.824 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:59.824 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:59.824 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.824 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.824 [2024-11-26 17:59:41.455518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.824 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.824 17:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:59.824 [2024-11-26 17:59:41.518048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:59.824 [2024-11-26 17:59:41.520874] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:59.824 [2024-11-26 17:59:41.673192] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:00.082 [2024-11-26 17:59:41.806299] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:00.082 [2024-11-26 17:59:41.806877] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:00.340 154.67 IOPS, 464.00 MiB/s [2024-11-26T17:59:42.203Z] [2024-11-26 17:59:42.070157] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:00.340 [2024-11-26 17:59:42.071168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:00.598 [2024-11-26 17:59:42.222185] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.856 "name": "raid_bdev1", 00:15:00.856 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:00.856 
"strip_size_kb": 0, 00:15:00.856 "state": "online", 00:15:00.856 "raid_level": "raid1", 00:15:00.856 "superblock": true, 00:15:00.856 "num_base_bdevs": 4, 00:15:00.856 "num_base_bdevs_discovered": 4, 00:15:00.856 "num_base_bdevs_operational": 4, 00:15:00.856 "process": { 00:15:00.856 "type": "rebuild", 00:15:00.856 "target": "spare", 00:15:00.856 "progress": { 00:15:00.856 "blocks": 14336, 00:15:00.856 "percent": 22 00:15:00.856 } 00:15:00.856 }, 00:15:00.856 "base_bdevs_list": [ 00:15:00.856 { 00:15:00.856 "name": "spare", 00:15:00.856 "uuid": "8dcf881e-8927-5266-88e2-8526e5e52342", 00:15:00.856 "is_configured": true, 00:15:00.856 "data_offset": 2048, 00:15:00.856 "data_size": 63488 00:15:00.856 }, 00:15:00.856 { 00:15:00.856 "name": "BaseBdev2", 00:15:00.856 "uuid": "9ace477c-b6e7-50ff-9917-437221daa1b7", 00:15:00.856 "is_configured": true, 00:15:00.856 "data_offset": 2048, 00:15:00.856 "data_size": 63488 00:15:00.856 }, 00:15:00.856 { 00:15:00.856 "name": "BaseBdev3", 00:15:00.856 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:00.856 "is_configured": true, 00:15:00.856 "data_offset": 2048, 00:15:00.856 "data_size": 63488 00:15:00.856 }, 00:15:00.856 { 00:15:00.856 "name": "BaseBdev4", 00:15:00.856 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:00.856 "is_configured": true, 00:15:00.856 "data_offset": 2048, 00:15:00.856 "data_size": 63488 00:15:00.856 } 00:15:00.856 ] 00:15:00.856 }' 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.856 [2024-11-26 17:59:42.598382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:00.856 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.856 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.856 [2024-11-26 17:59:42.649586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:01.114 [2024-11-26 17:59:42.934522] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:01.114 [2024-11-26 17:59:42.934596] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:01.114 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.114 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:01.114 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:01.114 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.114 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:01.114 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.114 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.114 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.114 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.114 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.114 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.114 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.114 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.371 130.00 IOPS, 390.00 MiB/s [2024-11-26T17:59:43.234Z] 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.371 "name": "raid_bdev1", 00:15:01.371 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:01.371 "strip_size_kb": 0, 00:15:01.371 "state": "online", 00:15:01.371 "raid_level": "raid1", 00:15:01.371 "superblock": true, 00:15:01.371 "num_base_bdevs": 4, 00:15:01.371 "num_base_bdevs_discovered": 3, 00:15:01.371 "num_base_bdevs_operational": 3, 00:15:01.371 "process": { 00:15:01.371 "type": "rebuild", 00:15:01.371 "target": "spare", 00:15:01.371 "progress": { 00:15:01.371 "blocks": 18432, 00:15:01.371 "percent": 29 00:15:01.371 } 00:15:01.371 }, 00:15:01.371 "base_bdevs_list": [ 00:15:01.371 { 00:15:01.371 "name": "spare", 00:15:01.371 "uuid": "8dcf881e-8927-5266-88e2-8526e5e52342", 00:15:01.371 "is_configured": true, 00:15:01.371 "data_offset": 2048, 00:15:01.371 "data_size": 63488 00:15:01.371 }, 00:15:01.371 { 00:15:01.371 "name": null, 00:15:01.371 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:01.371 "is_configured": false, 00:15:01.371 "data_offset": 0, 00:15:01.371 "data_size": 63488 00:15:01.371 }, 00:15:01.371 { 00:15:01.371 "name": "BaseBdev3", 00:15:01.371 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:01.371 "is_configured": true, 00:15:01.371 "data_offset": 2048, 00:15:01.371 "data_size": 63488 00:15:01.371 }, 00:15:01.371 { 00:15:01.371 "name": "BaseBdev4", 00:15:01.371 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:01.371 "is_configured": true, 00:15:01.371 "data_offset": 2048, 00:15:01.371 "data_size": 63488 00:15:01.371 } 00:15:01.371 ] 00:15:01.371 }' 00:15:01.371 17:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.371 [2024-11-26 17:59:43.058486] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=524 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.371 "name": "raid_bdev1", 00:15:01.371 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:01.371 "strip_size_kb": 0, 00:15:01.371 "state": "online", 00:15:01.371 "raid_level": "raid1", 00:15:01.371 "superblock": true, 00:15:01.371 "num_base_bdevs": 4, 00:15:01.371 "num_base_bdevs_discovered": 3, 00:15:01.371 "num_base_bdevs_operational": 3, 00:15:01.371 "process": { 00:15:01.371 "type": "rebuild", 00:15:01.371 "target": "spare", 00:15:01.371 "progress": { 00:15:01.371 "blocks": 20480, 00:15:01.371 "percent": 32 00:15:01.371 } 00:15:01.371 }, 00:15:01.371 "base_bdevs_list": [ 00:15:01.371 { 00:15:01.371 "name": "spare", 00:15:01.371 "uuid": "8dcf881e-8927-5266-88e2-8526e5e52342", 00:15:01.371 "is_configured": true, 00:15:01.371 "data_offset": 2048, 00:15:01.371 "data_size": 63488 00:15:01.371 }, 00:15:01.371 { 00:15:01.371 "name": null, 00:15:01.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.371 "is_configured": false, 00:15:01.371 "data_offset": 0, 00:15:01.371 "data_size": 63488 00:15:01.371 }, 00:15:01.371 { 00:15:01.371 "name": "BaseBdev3", 00:15:01.371 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:01.371 "is_configured": true, 00:15:01.371 "data_offset": 2048, 00:15:01.371 "data_size": 63488 00:15:01.371 }, 00:15:01.371 { 
00:15:01.371 "name": "BaseBdev4", 00:15:01.371 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:01.371 "is_configured": true, 00:15:01.371 "data_offset": 2048, 00:15:01.371 "data_size": 63488 00:15:01.371 } 00:15:01.371 ] 00:15:01.371 }' 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.371 17:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:01.628 [2024-11-26 17:59:43.288471] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:02.465 113.80 IOPS, 341.40 MiB/s [2024-11-26T17:59:44.328Z] 17:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.465 17:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.465 17:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.465 17:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.465 17:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.465 17:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.466 17:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.466 17:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.466 17:59:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.466 17:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.466 17:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.466 17:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.466 "name": "raid_bdev1", 00:15:02.466 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:02.466 "strip_size_kb": 0, 00:15:02.466 "state": "online", 00:15:02.466 "raid_level": "raid1", 00:15:02.466 "superblock": true, 00:15:02.466 "num_base_bdevs": 4, 00:15:02.466 "num_base_bdevs_discovered": 3, 00:15:02.466 "num_base_bdevs_operational": 3, 00:15:02.466 "process": { 00:15:02.466 "type": "rebuild", 00:15:02.466 "target": "spare", 00:15:02.466 "progress": { 00:15:02.466 "blocks": 38912, 00:15:02.466 "percent": 61 00:15:02.466 } 00:15:02.466 }, 00:15:02.466 "base_bdevs_list": [ 00:15:02.466 { 00:15:02.466 "name": "spare", 00:15:02.466 "uuid": "8dcf881e-8927-5266-88e2-8526e5e52342", 00:15:02.466 "is_configured": true, 00:15:02.466 "data_offset": 2048, 00:15:02.466 "data_size": 63488 00:15:02.466 }, 00:15:02.466 { 00:15:02.466 "name": null, 00:15:02.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.466 "is_configured": false, 00:15:02.466 "data_offset": 0, 00:15:02.466 "data_size": 63488 00:15:02.466 }, 00:15:02.466 { 00:15:02.466 "name": "BaseBdev3", 00:15:02.466 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:02.466 "is_configured": true, 00:15:02.466 "data_offset": 2048, 00:15:02.466 "data_size": 63488 00:15:02.466 }, 00:15:02.466 { 00:15:02.466 "name": "BaseBdev4", 00:15:02.466 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:02.466 "is_configured": true, 00:15:02.466 "data_offset": 2048, 00:15:02.466 "data_size": 63488 00:15:02.466 } 00:15:02.466 ] 00:15:02.466 }' 00:15:02.466 17:59:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.466 17:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.466 17:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.466 [2024-11-26 17:59:44.324976] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:02.724 17:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.724 17:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:02.983 [2024-11-26 17:59:44.662631] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:02.983 [2024-11-26 17:59:44.772078] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:03.500 101.50 IOPS, 304.50 MiB/s [2024-11-26T17:59:45.363Z] 17:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:03.500 17:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.500 17:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.500 17:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.500 17:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.500 17:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.759 17:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.759 17:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.759 
17:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.759 17:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.759 17:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.759 17:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.759 "name": "raid_bdev1", 00:15:03.759 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:03.759 "strip_size_kb": 0, 00:15:03.759 "state": "online", 00:15:03.759 "raid_level": "raid1", 00:15:03.759 "superblock": true, 00:15:03.759 "num_base_bdevs": 4, 00:15:03.759 "num_base_bdevs_discovered": 3, 00:15:03.759 "num_base_bdevs_operational": 3, 00:15:03.759 "process": { 00:15:03.759 "type": "rebuild", 00:15:03.759 "target": "spare", 00:15:03.759 "progress": { 00:15:03.759 "blocks": 55296, 00:15:03.759 "percent": 87 00:15:03.759 } 00:15:03.759 }, 00:15:03.759 "base_bdevs_list": [ 00:15:03.759 { 00:15:03.759 "name": "spare", 00:15:03.759 "uuid": "8dcf881e-8927-5266-88e2-8526e5e52342", 00:15:03.759 "is_configured": true, 00:15:03.759 "data_offset": 2048, 00:15:03.759 "data_size": 63488 00:15:03.759 }, 00:15:03.759 { 00:15:03.759 "name": null, 00:15:03.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.759 "is_configured": false, 00:15:03.759 "data_offset": 0, 00:15:03.759 "data_size": 63488 00:15:03.759 }, 00:15:03.759 { 00:15:03.759 "name": "BaseBdev3", 00:15:03.759 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:03.759 "is_configured": true, 00:15:03.759 "data_offset": 2048, 00:15:03.759 "data_size": 63488 00:15:03.759 }, 00:15:03.759 { 00:15:03.759 "name": "BaseBdev4", 00:15:03.759 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:03.759 "is_configured": true, 00:15:03.759 "data_offset": 2048, 00:15:03.759 "data_size": 63488 00:15:03.759 } 00:15:03.759 ] 00:15:03.759 }' 00:15:03.759 17:59:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.759 17:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.759 17:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.759 17:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.759 17:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:03.759 [2024-11-26 17:59:45.542271] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:04.017 [2024-11-26 17:59:45.876834] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:04.276 92.14 IOPS, 276.43 MiB/s [2024-11-26T17:59:46.139Z] [2024-11-26 17:59:45.976622] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:04.276 [2024-11-26 17:59:45.981737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.844 "name": "raid_bdev1", 00:15:04.844 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:04.844 "strip_size_kb": 0, 00:15:04.844 "state": "online", 00:15:04.844 "raid_level": "raid1", 00:15:04.844 "superblock": true, 00:15:04.844 "num_base_bdevs": 4, 00:15:04.844 "num_base_bdevs_discovered": 3, 00:15:04.844 "num_base_bdevs_operational": 3, 00:15:04.844 "base_bdevs_list": [ 00:15:04.844 { 00:15:04.844 "name": "spare", 00:15:04.844 "uuid": "8dcf881e-8927-5266-88e2-8526e5e52342", 00:15:04.844 "is_configured": true, 00:15:04.844 "data_offset": 2048, 00:15:04.844 "data_size": 63488 00:15:04.844 }, 00:15:04.844 { 00:15:04.844 "name": null, 00:15:04.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.844 "is_configured": false, 00:15:04.844 "data_offset": 0, 00:15:04.844 "data_size": 63488 00:15:04.844 }, 00:15:04.844 { 00:15:04.844 "name": "BaseBdev3", 00:15:04.844 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:04.844 "is_configured": true, 00:15:04.844 "data_offset": 2048, 00:15:04.844 "data_size": 63488 00:15:04.844 }, 00:15:04.844 { 00:15:04.844 "name": "BaseBdev4", 00:15:04.844 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:04.844 "is_configured": true, 00:15:04.844 "data_offset": 2048, 00:15:04.844 "data_size": 63488 00:15:04.844 } 00:15:04.844 ] 00:15:04.844 }' 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- 
# [[ none == \r\e\b\u\i\l\d ]] 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.844 "name": "raid_bdev1", 00:15:04.844 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:04.844 "strip_size_kb": 0, 00:15:04.844 "state": "online", 00:15:04.844 "raid_level": "raid1", 00:15:04.844 "superblock": true, 00:15:04.844 "num_base_bdevs": 4, 00:15:04.844 "num_base_bdevs_discovered": 3, 00:15:04.844 "num_base_bdevs_operational": 3, 00:15:04.844 "base_bdevs_list": [ 00:15:04.844 { 
00:15:04.844 "name": "spare", 00:15:04.844 "uuid": "8dcf881e-8927-5266-88e2-8526e5e52342", 00:15:04.844 "is_configured": true, 00:15:04.844 "data_offset": 2048, 00:15:04.844 "data_size": 63488 00:15:04.844 }, 00:15:04.844 { 00:15:04.844 "name": null, 00:15:04.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.844 "is_configured": false, 00:15:04.844 "data_offset": 0, 00:15:04.844 "data_size": 63488 00:15:04.844 }, 00:15:04.844 { 00:15:04.844 "name": "BaseBdev3", 00:15:04.844 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:04.844 "is_configured": true, 00:15:04.844 "data_offset": 2048, 00:15:04.844 "data_size": 63488 00:15:04.844 }, 00:15:04.844 { 00:15:04.844 "name": "BaseBdev4", 00:15:04.844 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:04.844 "is_configured": true, 00:15:04.844 "data_offset": 2048, 00:15:04.844 "data_size": 63488 00:15:04.844 } 00:15:04.844 ] 00:15:04.844 }' 00:15:04.844 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.103 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:05.103 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.103 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:05.103 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:05.103 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.103 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.103 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.103 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.103 17:59:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.103 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.103 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.103 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.103 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.103 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.103 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.103 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.103 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.103 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.104 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.104 "name": "raid_bdev1", 00:15:05.104 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:05.104 "strip_size_kb": 0, 00:15:05.104 "state": "online", 00:15:05.104 "raid_level": "raid1", 00:15:05.104 "superblock": true, 00:15:05.104 "num_base_bdevs": 4, 00:15:05.104 "num_base_bdevs_discovered": 3, 00:15:05.104 "num_base_bdevs_operational": 3, 00:15:05.104 "base_bdevs_list": [ 00:15:05.104 { 00:15:05.104 "name": "spare", 00:15:05.104 "uuid": "8dcf881e-8927-5266-88e2-8526e5e52342", 00:15:05.104 "is_configured": true, 00:15:05.104 "data_offset": 2048, 00:15:05.104 "data_size": 63488 00:15:05.104 }, 00:15:05.104 { 00:15:05.104 "name": null, 00:15:05.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.104 "is_configured": false, 00:15:05.104 "data_offset": 0, 00:15:05.104 
"data_size": 63488 00:15:05.104 }, 00:15:05.104 { 00:15:05.104 "name": "BaseBdev3", 00:15:05.104 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:05.104 "is_configured": true, 00:15:05.104 "data_offset": 2048, 00:15:05.104 "data_size": 63488 00:15:05.104 }, 00:15:05.104 { 00:15:05.104 "name": "BaseBdev4", 00:15:05.104 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:05.104 "is_configured": true, 00:15:05.104 "data_offset": 2048, 00:15:05.104 "data_size": 63488 00:15:05.104 } 00:15:05.104 ] 00:15:05.104 }' 00:15:05.104 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.104 17:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.622 85.50 IOPS, 256.50 MiB/s [2024-11-26T17:59:47.485Z] 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.622 [2024-11-26 17:59:47.257366] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:05.622 [2024-11-26 17:59:47.257419] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.622 00:15:05.622 Latency(us) 00:15:05.622 [2024-11-26T17:59:47.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.622 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:05.622 raid_bdev1 : 8.38 83.33 249.99 0.00 0.00 17705.38 307.65 112183.90 00:15:05.622 [2024-11-26T17:59:47.485Z] =================================================================================================================== 00:15:05.622 [2024-11-26T17:59:47.485Z] Total : 83.33 249.99 0.00 0.00 17705.38 307.65 112183.90 00:15:05.622 [2024-11-26 17:59:47.349196] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.622 [2024-11-26 17:59:47.349340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.622 [2024-11-26 17:59:47.349484] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.622 [2024-11-26 17:59:47.349541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:05.622 { 00:15:05.622 "results": [ 00:15:05.622 { 00:15:05.622 "job": "raid_bdev1", 00:15:05.622 "core_mask": "0x1", 00:15:05.622 "workload": "randrw", 00:15:05.622 "percentage": 50, 00:15:05.622 "status": "finished", 00:15:05.622 "queue_depth": 2, 00:15:05.622 "io_size": 3145728, 00:15:05.622 "runtime": 8.376212, 00:15:05.622 "iops": 83.33122418582529, 00:15:05.622 "mibps": 249.99367255747586, 00:15:05.622 "io_failed": 0, 00:15:05.622 "io_timeout": 0, 00:15:05.622 "avg_latency_us": 17705.38294565884, 00:15:05.622 "min_latency_us": 307.6471615720524, 00:15:05.622 "max_latency_us": 112183.89519650655 00:15:05.622 } 00:15:05.622 ], 00:15:05.622 "core_count": 1 00:15:05.622 } 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # 
'[' true = true ']' 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:05.622 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:05.881 /dev/nbd0 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:05.881 1+0 records in 00:15:05.881 1+0 records out 00:15:05.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584526 s, 7.0 MB/s 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:05.881 17:59:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:05.881 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:06.140 /dev/nbd1 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # 
grep -q -w nbd1 /proc/partitions 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:06.140 1+0 records in 00:15:06.140 1+0 records out 00:15:06.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245541 s, 16.7 MB/s 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:06.140 17:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:06.399 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:06.399 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:06.399 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:06.399 17:59:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:06.399 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:06.399 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:06.399 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:06.658 
17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:06.658 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:06.916 /dev/nbd1 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:06.916 1+0 records in 
00:15:06.916 1+0 records out 00:15:06.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428135 s, 9.6 MB/s 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:06.916 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:07.175 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:07.175 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.175 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:07.175 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:07.175 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:07.175 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.175 17:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:07.433 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:07.433 
17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:07.433 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:07.433 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.433 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.433 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:07.433 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:07.433 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:07.433 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:07.433 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.433 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:07.433 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:07.433 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:07.433 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.433 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.691 17:59:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.691 [2024-11-26 17:59:49.372179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:07.691 [2024-11-26 17:59:49.372252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.691 [2024-11-26 17:59:49.372278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:07.691 [2024-11-26 17:59:49.372290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.691 [2024-11-26 17:59:49.374727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.691 [2024-11-26 17:59:49.374770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
spare 00:15:07.691 [2024-11-26 17:59:49.374884] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:07.691 [2024-11-26 17:59:49.374948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:07.691 [2024-11-26 17:59:49.375123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:07.691 [2024-11-26 17:59:49.375243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:07.691 spare 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.691 [2024-11-26 17:59:49.475180] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:07.691 [2024-11-26 17:59:49.475226] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:07.691 [2024-11-26 17:59:49.475590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:07.691 [2024-11-26 17:59:49.475812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:07.691 [2024-11-26 17:59:49.475826] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:07.691 [2024-11-26 17:59:49.476100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:07.691 17:59:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.691 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.691 "name": "raid_bdev1", 00:15:07.691 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:07.691 "strip_size_kb": 0, 00:15:07.691 "state": "online", 00:15:07.691 "raid_level": "raid1", 00:15:07.691 "superblock": true, 00:15:07.691 "num_base_bdevs": 4, 00:15:07.691 "num_base_bdevs_discovered": 3, 00:15:07.691 "num_base_bdevs_operational": 3, 
00:15:07.691 "base_bdevs_list": [ 00:15:07.691 { 00:15:07.691 "name": "spare", 00:15:07.691 "uuid": "8dcf881e-8927-5266-88e2-8526e5e52342", 00:15:07.691 "is_configured": true, 00:15:07.692 "data_offset": 2048, 00:15:07.692 "data_size": 63488 00:15:07.692 }, 00:15:07.692 { 00:15:07.692 "name": null, 00:15:07.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.692 "is_configured": false, 00:15:07.692 "data_offset": 2048, 00:15:07.692 "data_size": 63488 00:15:07.692 }, 00:15:07.692 { 00:15:07.692 "name": "BaseBdev3", 00:15:07.692 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:07.692 "is_configured": true, 00:15:07.692 "data_offset": 2048, 00:15:07.692 "data_size": 63488 00:15:07.692 }, 00:15:07.692 { 00:15:07.692 "name": "BaseBdev4", 00:15:07.692 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:07.692 "is_configured": true, 00:15:07.692 "data_offset": 2048, 00:15:07.692 "data_size": 63488 00:15:07.692 } 00:15:07.692 ] 00:15:07.692 }' 00:15:07.692 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.692 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.257 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.257 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.257 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.257 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.257 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.257 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.257 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.258 
17:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.258 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.258 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.258 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.258 "name": "raid_bdev1", 00:15:08.258 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:08.258 "strip_size_kb": 0, 00:15:08.258 "state": "online", 00:15:08.258 "raid_level": "raid1", 00:15:08.258 "superblock": true, 00:15:08.258 "num_base_bdevs": 4, 00:15:08.258 "num_base_bdevs_discovered": 3, 00:15:08.258 "num_base_bdevs_operational": 3, 00:15:08.258 "base_bdevs_list": [ 00:15:08.258 { 00:15:08.258 "name": "spare", 00:15:08.258 "uuid": "8dcf881e-8927-5266-88e2-8526e5e52342", 00:15:08.258 "is_configured": true, 00:15:08.258 "data_offset": 2048, 00:15:08.258 "data_size": 63488 00:15:08.258 }, 00:15:08.258 { 00:15:08.258 "name": null, 00:15:08.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.258 "is_configured": false, 00:15:08.258 "data_offset": 2048, 00:15:08.258 "data_size": 63488 00:15:08.258 }, 00:15:08.258 { 00:15:08.258 "name": "BaseBdev3", 00:15:08.258 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:08.258 "is_configured": true, 00:15:08.258 "data_offset": 2048, 00:15:08.258 "data_size": 63488 00:15:08.258 }, 00:15:08.258 { 00:15:08.258 "name": "BaseBdev4", 00:15:08.258 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:08.258 "is_configured": true, 00:15:08.258 "data_offset": 2048, 00:15:08.258 "data_size": 63488 00:15:08.258 } 00:15:08.258 ] 00:15:08.258 }' 00:15:08.258 17:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.258 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:08.258 17:59:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.258 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:08.258 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.258 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.258 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.258 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:08.258 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.517 [2024-11-26 17:59:50.163188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.517 "name": "raid_bdev1", 00:15:08.517 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:08.517 "strip_size_kb": 0, 00:15:08.517 "state": "online", 00:15:08.517 "raid_level": "raid1", 00:15:08.517 "superblock": true, 00:15:08.517 "num_base_bdevs": 4, 00:15:08.517 "num_base_bdevs_discovered": 2, 00:15:08.517 "num_base_bdevs_operational": 2, 00:15:08.517 "base_bdevs_list": [ 00:15:08.517 { 00:15:08.517 "name": null, 00:15:08.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.517 "is_configured": false, 00:15:08.517 "data_offset": 0, 00:15:08.517 "data_size": 63488 00:15:08.517 }, 00:15:08.517 { 00:15:08.517 "name": null, 00:15:08.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.517 "is_configured": false, 00:15:08.517 
"data_offset": 2048, 00:15:08.517 "data_size": 63488 00:15:08.517 }, 00:15:08.517 { 00:15:08.517 "name": "BaseBdev3", 00:15:08.517 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:08.517 "is_configured": true, 00:15:08.517 "data_offset": 2048, 00:15:08.517 "data_size": 63488 00:15:08.517 }, 00:15:08.517 { 00:15:08.517 "name": "BaseBdev4", 00:15:08.517 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:08.517 "is_configured": true, 00:15:08.517 "data_offset": 2048, 00:15:08.517 "data_size": 63488 00:15:08.517 } 00:15:08.517 ] 00:15:08.517 }' 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.517 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.775 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:08.775 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.775 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.033 [2024-11-26 17:59:50.642460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:09.033 [2024-11-26 17:59:50.642717] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:09.033 [2024-11-26 17:59:50.642746] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:09.033 [2024-11-26 17:59:50.642791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:09.033 [2024-11-26 17:59:50.660305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:09.033 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.033 17:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:09.033 [2024-11-26 17:59:50.662535] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:09.970 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.970 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.970 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.970 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.970 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.970 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.970 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.970 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.970 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.970 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.970 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.970 "name": "raid_bdev1", 00:15:09.970 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:09.970 "strip_size_kb": 0, 00:15:09.970 "state": "online", 
00:15:09.970 "raid_level": "raid1", 00:15:09.970 "superblock": true, 00:15:09.970 "num_base_bdevs": 4, 00:15:09.970 "num_base_bdevs_discovered": 3, 00:15:09.970 "num_base_bdevs_operational": 3, 00:15:09.970 "process": { 00:15:09.970 "type": "rebuild", 00:15:09.970 "target": "spare", 00:15:09.970 "progress": { 00:15:09.970 "blocks": 20480, 00:15:09.970 "percent": 32 00:15:09.970 } 00:15:09.970 }, 00:15:09.970 "base_bdevs_list": [ 00:15:09.970 { 00:15:09.970 "name": "spare", 00:15:09.970 "uuid": "8dcf881e-8927-5266-88e2-8526e5e52342", 00:15:09.970 "is_configured": true, 00:15:09.970 "data_offset": 2048, 00:15:09.970 "data_size": 63488 00:15:09.970 }, 00:15:09.970 { 00:15:09.970 "name": null, 00:15:09.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.970 "is_configured": false, 00:15:09.970 "data_offset": 2048, 00:15:09.970 "data_size": 63488 00:15:09.970 }, 00:15:09.970 { 00:15:09.970 "name": "BaseBdev3", 00:15:09.970 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:09.970 "is_configured": true, 00:15:09.970 "data_offset": 2048, 00:15:09.970 "data_size": 63488 00:15:09.970 }, 00:15:09.970 { 00:15:09.970 "name": "BaseBdev4", 00:15:09.970 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:09.970 "is_configured": true, 00:15:09.970 "data_offset": 2048, 00:15:09.970 "data_size": 63488 00:15:09.970 } 00:15:09.970 ] 00:15:09.970 }' 00:15:09.970 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.970 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.970 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.970 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.970 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:09.970 17:59:51 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.970 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.970 [2024-11-26 17:59:51.801872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:10.230 [2024-11-26 17:59:51.868731] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:10.230 [2024-11-26 17:59:51.868801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.230 [2024-11-26 17:59:51.868819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:10.230 [2024-11-26 17:59:51.868829] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:10.230 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.230 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:10.230 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.230 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.230 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.230 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.230 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:10.230 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.230 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.230 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.230 17:59:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.230 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.230 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.230 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.230 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.230 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.230 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.230 "name": "raid_bdev1", 00:15:10.230 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:10.230 "strip_size_kb": 0, 00:15:10.230 "state": "online", 00:15:10.230 "raid_level": "raid1", 00:15:10.230 "superblock": true, 00:15:10.230 "num_base_bdevs": 4, 00:15:10.230 "num_base_bdevs_discovered": 2, 00:15:10.230 "num_base_bdevs_operational": 2, 00:15:10.230 "base_bdevs_list": [ 00:15:10.230 { 00:15:10.230 "name": null, 00:15:10.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.230 "is_configured": false, 00:15:10.230 "data_offset": 0, 00:15:10.230 "data_size": 63488 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "name": null, 00:15:10.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.230 "is_configured": false, 00:15:10.230 "data_offset": 2048, 00:15:10.230 "data_size": 63488 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "name": "BaseBdev3", 00:15:10.230 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:10.230 "is_configured": true, 00:15:10.230 "data_offset": 2048, 00:15:10.230 "data_size": 63488 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "name": "BaseBdev4", 00:15:10.230 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:10.230 "is_configured": true, 00:15:10.230 "data_offset": 2048, 00:15:10.230 
"data_size": 63488 00:15:10.230 } 00:15:10.230 ] 00:15:10.230 }' 00:15:10.230 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.230 17:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.489 17:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:10.489 17:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.489 17:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.489 [2024-11-26 17:59:52.347389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:10.489 [2024-11-26 17:59:52.347488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.489 [2024-11-26 17:59:52.347527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:10.489 [2024-11-26 17:59:52.347543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.489 [2024-11-26 17:59:52.348102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.489 [2024-11-26 17:59:52.348145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:10.489 [2024-11-26 17:59:52.348259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:10.489 [2024-11-26 17:59:52.348285] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:10.489 [2024-11-26 17:59:52.348298] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:10.489 [2024-11-26 17:59:52.348330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:10.748 [2024-11-26 17:59:52.365943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:10.748 spare 00:15:10.748 17:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.748 17:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:10.748 [2024-11-26 17:59:52.368264] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:11.685 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.685 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.685 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.685 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.685 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.685 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.685 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.685 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.685 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.685 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.685 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.685 "name": "raid_bdev1", 00:15:11.685 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:11.685 "strip_size_kb": 0, 00:15:11.685 
"state": "online", 00:15:11.685 "raid_level": "raid1", 00:15:11.685 "superblock": true, 00:15:11.685 "num_base_bdevs": 4, 00:15:11.685 "num_base_bdevs_discovered": 3, 00:15:11.685 "num_base_bdevs_operational": 3, 00:15:11.685 "process": { 00:15:11.685 "type": "rebuild", 00:15:11.685 "target": "spare", 00:15:11.685 "progress": { 00:15:11.685 "blocks": 20480, 00:15:11.685 "percent": 32 00:15:11.685 } 00:15:11.685 }, 00:15:11.685 "base_bdevs_list": [ 00:15:11.685 { 00:15:11.685 "name": "spare", 00:15:11.685 "uuid": "8dcf881e-8927-5266-88e2-8526e5e52342", 00:15:11.685 "is_configured": true, 00:15:11.685 "data_offset": 2048, 00:15:11.685 "data_size": 63488 00:15:11.685 }, 00:15:11.685 { 00:15:11.685 "name": null, 00:15:11.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.685 "is_configured": false, 00:15:11.685 "data_offset": 2048, 00:15:11.685 "data_size": 63488 00:15:11.685 }, 00:15:11.685 { 00:15:11.685 "name": "BaseBdev3", 00:15:11.685 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:11.685 "is_configured": true, 00:15:11.685 "data_offset": 2048, 00:15:11.685 "data_size": 63488 00:15:11.685 }, 00:15:11.685 { 00:15:11.685 "name": "BaseBdev4", 00:15:11.685 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:11.685 "is_configured": true, 00:15:11.685 "data_offset": 2048, 00:15:11.685 "data_size": 63488 00:15:11.685 } 00:15:11.685 ] 00:15:11.685 }' 00:15:11.685 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.685 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.685 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.685 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.685 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:11.685 17:59:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.685 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.685 [2024-11-26 17:59:53.535486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.944 [2024-11-26 17:59:53.574599] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:11.944 [2024-11-26 17:59:53.574668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.944 [2024-11-26 17:59:53.574687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.944 [2024-11-26 17:59:53.574694] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:11.944 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.944 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:11.944 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.945 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.945 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.945 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.945 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:11.945 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.945 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.945 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.945 17:59:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.945 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.945 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.945 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.945 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.945 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.945 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.945 "name": "raid_bdev1", 00:15:11.945 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:11.945 "strip_size_kb": 0, 00:15:11.945 "state": "online", 00:15:11.945 "raid_level": "raid1", 00:15:11.945 "superblock": true, 00:15:11.945 "num_base_bdevs": 4, 00:15:11.945 "num_base_bdevs_discovered": 2, 00:15:11.945 "num_base_bdevs_operational": 2, 00:15:11.945 "base_bdevs_list": [ 00:15:11.945 { 00:15:11.945 "name": null, 00:15:11.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.945 "is_configured": false, 00:15:11.945 "data_offset": 0, 00:15:11.945 "data_size": 63488 00:15:11.945 }, 00:15:11.945 { 00:15:11.945 "name": null, 00:15:11.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.945 "is_configured": false, 00:15:11.945 "data_offset": 2048, 00:15:11.945 "data_size": 63488 00:15:11.945 }, 00:15:11.945 { 00:15:11.945 "name": "BaseBdev3", 00:15:11.945 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:11.945 "is_configured": true, 00:15:11.945 "data_offset": 2048, 00:15:11.945 "data_size": 63488 00:15:11.945 }, 00:15:11.945 { 00:15:11.945 "name": "BaseBdev4", 00:15:11.945 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:11.945 "is_configured": true, 00:15:11.945 "data_offset": 2048, 00:15:11.945 
"data_size": 63488 00:15:11.945 } 00:15:11.945 ] 00:15:11.945 }' 00:15:11.945 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.945 17:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.512 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:12.512 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.512 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:12.512 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:12.512 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.512 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.512 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.512 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.512 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.512 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.512 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.512 "name": "raid_bdev1", 00:15:12.512 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:12.512 "strip_size_kb": 0, 00:15:12.512 "state": "online", 00:15:12.512 "raid_level": "raid1", 00:15:12.512 "superblock": true, 00:15:12.512 "num_base_bdevs": 4, 00:15:12.512 "num_base_bdevs_discovered": 2, 00:15:12.512 "num_base_bdevs_operational": 2, 00:15:12.512 "base_bdevs_list": [ 00:15:12.512 { 00:15:12.512 "name": null, 00:15:12.512 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:12.512 "is_configured": false, 00:15:12.512 "data_offset": 0, 00:15:12.512 "data_size": 63488 00:15:12.512 }, 00:15:12.512 { 00:15:12.512 "name": null, 00:15:12.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.512 "is_configured": false, 00:15:12.512 "data_offset": 2048, 00:15:12.512 "data_size": 63488 00:15:12.512 }, 00:15:12.512 { 00:15:12.512 "name": "BaseBdev3", 00:15:12.513 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:12.513 "is_configured": true, 00:15:12.513 "data_offset": 2048, 00:15:12.513 "data_size": 63488 00:15:12.513 }, 00:15:12.513 { 00:15:12.513 "name": "BaseBdev4", 00:15:12.513 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:12.513 "is_configured": true, 00:15:12.513 "data_offset": 2048, 00:15:12.513 "data_size": 63488 00:15:12.513 } 00:15:12.513 ] 00:15:12.513 }' 00:15:12.513 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.513 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:12.513 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.513 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:12.513 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:12.513 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.513 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.513 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.513 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:12.513 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.513 17:59:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.513 [2024-11-26 17:59:54.214075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:12.513 [2024-11-26 17:59:54.214176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.513 [2024-11-26 17:59:54.214208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:12.513 [2024-11-26 17:59:54.214219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.513 [2024-11-26 17:59:54.214773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.513 [2024-11-26 17:59:54.214808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:12.513 [2024-11-26 17:59:54.214917] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:12.513 [2024-11-26 17:59:54.214942] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:12.513 [2024-11-26 17:59:54.214955] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:12.513 [2024-11-26 17:59:54.214967] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:12.513 BaseBdev1 00:15:12.513 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.513 17:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:13.450 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:13.450 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.450 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:13.450 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.450 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.450 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.450 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.450 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.450 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.450 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.450 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.450 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.450 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.450 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.450 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.451 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.451 "name": "raid_bdev1", 00:15:13.451 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:13.451 "strip_size_kb": 0, 00:15:13.451 "state": "online", 00:15:13.451 "raid_level": "raid1", 00:15:13.451 "superblock": true, 00:15:13.451 "num_base_bdevs": 4, 00:15:13.451 "num_base_bdevs_discovered": 2, 00:15:13.451 "num_base_bdevs_operational": 2, 00:15:13.451 "base_bdevs_list": [ 00:15:13.451 { 00:15:13.451 "name": null, 00:15:13.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.451 "is_configured": false, 00:15:13.451 
"data_offset": 0, 00:15:13.451 "data_size": 63488 00:15:13.451 }, 00:15:13.451 { 00:15:13.451 "name": null, 00:15:13.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.451 "is_configured": false, 00:15:13.451 "data_offset": 2048, 00:15:13.451 "data_size": 63488 00:15:13.451 }, 00:15:13.451 { 00:15:13.451 "name": "BaseBdev3", 00:15:13.451 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:13.451 "is_configured": true, 00:15:13.451 "data_offset": 2048, 00:15:13.451 "data_size": 63488 00:15:13.451 }, 00:15:13.451 { 00:15:13.451 "name": "BaseBdev4", 00:15:13.451 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:13.451 "is_configured": true, 00:15:13.451 "data_offset": 2048, 00:15:13.451 "data_size": 63488 00:15:13.451 } 00:15:13.451 ] 00:15:13.451 }' 00:15:13.451 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.451 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.019 "name": "raid_bdev1", 00:15:14.019 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:14.019 "strip_size_kb": 0, 00:15:14.019 "state": "online", 00:15:14.019 "raid_level": "raid1", 00:15:14.019 "superblock": true, 00:15:14.019 "num_base_bdevs": 4, 00:15:14.019 "num_base_bdevs_discovered": 2, 00:15:14.019 "num_base_bdevs_operational": 2, 00:15:14.019 "base_bdevs_list": [ 00:15:14.019 { 00:15:14.019 "name": null, 00:15:14.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.019 "is_configured": false, 00:15:14.019 "data_offset": 0, 00:15:14.019 "data_size": 63488 00:15:14.019 }, 00:15:14.019 { 00:15:14.019 "name": null, 00:15:14.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.019 "is_configured": false, 00:15:14.019 "data_offset": 2048, 00:15:14.019 "data_size": 63488 00:15:14.019 }, 00:15:14.019 { 00:15:14.019 "name": "BaseBdev3", 00:15:14.019 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:14.019 "is_configured": true, 00:15:14.019 "data_offset": 2048, 00:15:14.019 "data_size": 63488 00:15:14.019 }, 00:15:14.019 { 00:15:14.019 "name": "BaseBdev4", 00:15:14.019 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:14.019 "is_configured": true, 00:15:14.019 "data_offset": 2048, 00:15:14.019 "data_size": 63488 00:15:14.019 } 00:15:14.019 ] 00:15:14.019 }' 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.019 
17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.019 [2024-11-26 17:59:55.835858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.019 [2024-11-26 17:59:55.836100] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:14.019 [2024-11-26 17:59:55.836120] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:14.019 request: 00:15:14.019 { 00:15:14.019 "base_bdev": "BaseBdev1", 00:15:14.019 "raid_bdev": "raid_bdev1", 00:15:14.019 "method": "bdev_raid_add_base_bdev", 00:15:14.019 "req_id": 1 00:15:14.019 } 00:15:14.019 Got JSON-RPC error response 00:15:14.019 response: 00:15:14.019 { 00:15:14.019 "code": -22, 00:15:14.019 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:14.019 } 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:14.019 17:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:15.018 17:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:15.018 17:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.018 17:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.018 17:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.018 17:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.018 17:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.018 17:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.018 17:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.018 17:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.018 17:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.018 17:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.018 17:59:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.018 17:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.018 17:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.018 17:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.277 17:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.277 "name": "raid_bdev1", 00:15:15.277 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:15.277 "strip_size_kb": 0, 00:15:15.277 "state": "online", 00:15:15.277 "raid_level": "raid1", 00:15:15.277 "superblock": true, 00:15:15.277 "num_base_bdevs": 4, 00:15:15.277 "num_base_bdevs_discovered": 2, 00:15:15.277 "num_base_bdevs_operational": 2, 00:15:15.277 "base_bdevs_list": [ 00:15:15.277 { 00:15:15.277 "name": null, 00:15:15.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.277 "is_configured": false, 00:15:15.277 "data_offset": 0, 00:15:15.277 "data_size": 63488 00:15:15.277 }, 00:15:15.277 { 00:15:15.277 "name": null, 00:15:15.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.277 "is_configured": false, 00:15:15.277 "data_offset": 2048, 00:15:15.277 "data_size": 63488 00:15:15.277 }, 00:15:15.277 { 00:15:15.277 "name": "BaseBdev3", 00:15:15.277 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:15.277 "is_configured": true, 00:15:15.277 "data_offset": 2048, 00:15:15.277 "data_size": 63488 00:15:15.277 }, 00:15:15.277 { 00:15:15.277 "name": "BaseBdev4", 00:15:15.277 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:15.277 "is_configured": true, 00:15:15.277 "data_offset": 2048, 00:15:15.277 "data_size": 63488 00:15:15.277 } 00:15:15.277 ] 00:15:15.277 }' 00:15:15.277 17:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.277 17:59:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.536 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:15.536 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.536 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:15.536 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:15.536 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.536 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.536 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.536 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.536 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.536 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.536 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.536 "name": "raid_bdev1", 00:15:15.536 "uuid": "4cae69dc-c898-413a-bfc3-9690e08164fe", 00:15:15.536 "strip_size_kb": 0, 00:15:15.536 "state": "online", 00:15:15.536 "raid_level": "raid1", 00:15:15.536 "superblock": true, 00:15:15.536 "num_base_bdevs": 4, 00:15:15.536 "num_base_bdevs_discovered": 2, 00:15:15.536 "num_base_bdevs_operational": 2, 00:15:15.536 "base_bdevs_list": [ 00:15:15.536 { 00:15:15.536 "name": null, 00:15:15.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.536 "is_configured": false, 00:15:15.536 "data_offset": 0, 00:15:15.536 "data_size": 63488 00:15:15.536 }, 00:15:15.536 { 00:15:15.536 "name": null, 00:15:15.536 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:15.536 "is_configured": false, 00:15:15.536 "data_offset": 2048, 00:15:15.536 "data_size": 63488 00:15:15.536 }, 00:15:15.536 { 00:15:15.536 "name": "BaseBdev3", 00:15:15.536 "uuid": "d5f07b63-3b3d-56e4-bc8f-08d0d655e8cf", 00:15:15.536 "is_configured": true, 00:15:15.536 "data_offset": 2048, 00:15:15.536 "data_size": 63488 00:15:15.536 }, 00:15:15.536 { 00:15:15.536 "name": "BaseBdev4", 00:15:15.536 "uuid": "7809430f-8b7e-5c1c-9214-692adb7c2bdb", 00:15:15.536 "is_configured": true, 00:15:15.536 "data_offset": 2048, 00:15:15.536 "data_size": 63488 00:15:15.536 } 00:15:15.536 ] 00:15:15.536 }' 00:15:15.536 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.795 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:15.795 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.795 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:15.795 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79536 00:15:15.795 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79536 ']' 00:15:15.795 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79536 00:15:15.795 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:15.795 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.795 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79536 00:15:15.795 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:15.795 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:15:15.795 killing process with pid 79536 00:15:15.795 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79536' 00:15:15.795 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79536 00:15:15.795 Received shutdown signal, test time was about 18.593492 seconds 00:15:15.795 00:15:15.795 Latency(us) 00:15:15.795 [2024-11-26T17:59:57.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.795 [2024-11-26T17:59:57.658Z] =================================================================================================================== 00:15:15.795 [2024-11-26T17:59:57.658Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:15.795 [2024-11-26 17:59:57.522799] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:15.795 17:59:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79536 00:15:15.795 [2024-11-26 17:59:57.522950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:15.795 [2024-11-26 17:59:57.523045] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:15.795 [2024-11-26 17:59:57.523061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:16.363 [2024-11-26 17:59:58.011551] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.742 17:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:17.742 00:15:17.742 real 0m22.307s 00:15:17.742 user 0m29.224s 00:15:17.742 sys 0m2.643s 00:15:17.742 17:59:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.742 17:59:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.742 ************************************ 00:15:17.742 END TEST raid_rebuild_test_sb_io 00:15:17.742 
************************************ 00:15:17.742 17:59:59 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:17.742 17:59:59 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:17.742 17:59:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:17.742 17:59:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:17.742 17:59:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:17.742 ************************************ 00:15:17.742 START TEST raid5f_state_function_test 00:15:17.742 ************************************ 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:17.742 17:59:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80267 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:17.742 17:59:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80267' 00:15:17.742 Process raid pid: 80267 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80267 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80267 ']' 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.742 17:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.742 [2024-11-26 17:59:59.487514] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:15:17.742 [2024-11-26 17:59:59.487637] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.001 [2024-11-26 17:59:59.647643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.001 [2024-11-26 17:59:59.772375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.261 [2024-11-26 17:59:59.985912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.261 [2024-11-26 17:59:59.985966] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.520 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.521 [2024-11-26 18:00:00.358992] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:18.521 [2024-11-26 18:00:00.359065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:18.521 [2024-11-26 18:00:00.359078] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:18.521 [2024-11-26 18:00:00.359089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:18.521 [2024-11-26 18:00:00.359096] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:18.521 [2024-11-26 18:00:00.359107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.521 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.780 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:18.781 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.781 "name": "Existed_Raid", 00:15:18.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.781 "strip_size_kb": 64, 00:15:18.781 "state": "configuring", 00:15:18.781 "raid_level": "raid5f", 00:15:18.781 "superblock": false, 00:15:18.781 "num_base_bdevs": 3, 00:15:18.781 "num_base_bdevs_discovered": 0, 00:15:18.781 "num_base_bdevs_operational": 3, 00:15:18.781 "base_bdevs_list": [ 00:15:18.781 { 00:15:18.781 "name": "BaseBdev1", 00:15:18.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.781 "is_configured": false, 00:15:18.781 "data_offset": 0, 00:15:18.781 "data_size": 0 00:15:18.781 }, 00:15:18.781 { 00:15:18.781 "name": "BaseBdev2", 00:15:18.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.781 "is_configured": false, 00:15:18.781 "data_offset": 0, 00:15:18.781 "data_size": 0 00:15:18.781 }, 00:15:18.781 { 00:15:18.781 "name": "BaseBdev3", 00:15:18.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.781 "is_configured": false, 00:15:18.781 "data_offset": 0, 00:15:18.781 "data_size": 0 00:15:18.781 } 00:15:18.781 ] 00:15:18.781 }' 00:15:18.781 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.781 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.040 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:19.040 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.040 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.040 [2024-11-26 18:00:00.782217] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:19.040 [2024-11-26 18:00:00.782267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:19.040 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.040 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:19.040 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.040 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.040 [2024-11-26 18:00:00.790225] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.040 [2024-11-26 18:00:00.790283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.040 [2024-11-26 18:00:00.790293] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.040 [2024-11-26 18:00:00.790304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.040 [2024-11-26 18:00:00.790311] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:19.040 [2024-11-26 18:00:00.790321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:19.040 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.041 [2024-11-26 18:00:00.837675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.041 BaseBdev1 00:15:19.041 18:00:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.041 [ 00:15:19.041 { 00:15:19.041 "name": "BaseBdev1", 00:15:19.041 "aliases": [ 00:15:19.041 "323c138d-97a8-47c2-8794-9b69ae4ec836" 00:15:19.041 ], 00:15:19.041 "product_name": "Malloc disk", 00:15:19.041 "block_size": 512, 00:15:19.041 "num_blocks": 65536, 00:15:19.041 "uuid": "323c138d-97a8-47c2-8794-9b69ae4ec836", 00:15:19.041 "assigned_rate_limits": { 00:15:19.041 "rw_ios_per_sec": 0, 00:15:19.041 
"rw_mbytes_per_sec": 0, 00:15:19.041 "r_mbytes_per_sec": 0, 00:15:19.041 "w_mbytes_per_sec": 0 00:15:19.041 }, 00:15:19.041 "claimed": true, 00:15:19.041 "claim_type": "exclusive_write", 00:15:19.041 "zoned": false, 00:15:19.041 "supported_io_types": { 00:15:19.041 "read": true, 00:15:19.041 "write": true, 00:15:19.041 "unmap": true, 00:15:19.041 "flush": true, 00:15:19.041 "reset": true, 00:15:19.041 "nvme_admin": false, 00:15:19.041 "nvme_io": false, 00:15:19.041 "nvme_io_md": false, 00:15:19.041 "write_zeroes": true, 00:15:19.041 "zcopy": true, 00:15:19.041 "get_zone_info": false, 00:15:19.041 "zone_management": false, 00:15:19.041 "zone_append": false, 00:15:19.041 "compare": false, 00:15:19.041 "compare_and_write": false, 00:15:19.041 "abort": true, 00:15:19.041 "seek_hole": false, 00:15:19.041 "seek_data": false, 00:15:19.041 "copy": true, 00:15:19.041 "nvme_iov_md": false 00:15:19.041 }, 00:15:19.041 "memory_domains": [ 00:15:19.041 { 00:15:19.041 "dma_device_id": "system", 00:15:19.041 "dma_device_type": 1 00:15:19.041 }, 00:15:19.041 { 00:15:19.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.041 "dma_device_type": 2 00:15:19.041 } 00:15:19.041 ], 00:15:19.041 "driver_specific": {} 00:15:19.041 } 00:15:19.041 ] 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.041 18:00:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.041 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.300 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.300 "name": "Existed_Raid", 00:15:19.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.300 "strip_size_kb": 64, 00:15:19.300 "state": "configuring", 00:15:19.300 "raid_level": "raid5f", 00:15:19.300 "superblock": false, 00:15:19.300 "num_base_bdevs": 3, 00:15:19.300 "num_base_bdevs_discovered": 1, 00:15:19.300 "num_base_bdevs_operational": 3, 00:15:19.300 "base_bdevs_list": [ 00:15:19.300 { 00:15:19.300 "name": "BaseBdev1", 00:15:19.300 "uuid": "323c138d-97a8-47c2-8794-9b69ae4ec836", 00:15:19.300 "is_configured": true, 00:15:19.300 "data_offset": 0, 00:15:19.300 "data_size": 65536 00:15:19.300 }, 00:15:19.300 { 00:15:19.300 "name": 
"BaseBdev2", 00:15:19.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.301 "is_configured": false, 00:15:19.301 "data_offset": 0, 00:15:19.301 "data_size": 0 00:15:19.301 }, 00:15:19.301 { 00:15:19.301 "name": "BaseBdev3", 00:15:19.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.301 "is_configured": false, 00:15:19.301 "data_offset": 0, 00:15:19.301 "data_size": 0 00:15:19.301 } 00:15:19.301 ] 00:15:19.301 }' 00:15:19.301 18:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.301 18:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.559 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:19.559 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.559 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.559 [2024-11-26 18:00:01.348999] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:19.559 [2024-11-26 18:00:01.349182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:19.559 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.559 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:19.559 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.559 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.559 [2024-11-26 18:00:01.361100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.559 [2024-11-26 18:00:01.363414] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:19.560 [2024-11-26 18:00:01.363534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.560 [2024-11-26 18:00:01.363574] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:19.560 [2024-11-26 18:00:01.363603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.560 "name": "Existed_Raid", 00:15:19.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.560 "strip_size_kb": 64, 00:15:19.560 "state": "configuring", 00:15:19.560 "raid_level": "raid5f", 00:15:19.560 "superblock": false, 00:15:19.560 "num_base_bdevs": 3, 00:15:19.560 "num_base_bdevs_discovered": 1, 00:15:19.560 "num_base_bdevs_operational": 3, 00:15:19.560 "base_bdevs_list": [ 00:15:19.560 { 00:15:19.560 "name": "BaseBdev1", 00:15:19.560 "uuid": "323c138d-97a8-47c2-8794-9b69ae4ec836", 00:15:19.560 "is_configured": true, 00:15:19.560 "data_offset": 0, 00:15:19.560 "data_size": 65536 00:15:19.560 }, 00:15:19.560 { 00:15:19.560 "name": "BaseBdev2", 00:15:19.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.560 "is_configured": false, 00:15:19.560 "data_offset": 0, 00:15:19.560 "data_size": 0 00:15:19.560 }, 00:15:19.560 { 00:15:19.560 "name": "BaseBdev3", 00:15:19.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.560 "is_configured": false, 00:15:19.560 "data_offset": 0, 00:15:19.560 "data_size": 0 00:15:19.560 } 00:15:19.560 ] 00:15:19.560 }' 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.560 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.128 [2024-11-26 18:00:01.914614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.128 BaseBdev2 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:20.128 [ 00:15:20.128 { 00:15:20.128 "name": "BaseBdev2", 00:15:20.128 "aliases": [ 00:15:20.128 "42fe54cc-06f4-44fa-8c5b-3d243bdac858" 00:15:20.128 ], 00:15:20.128 "product_name": "Malloc disk", 00:15:20.128 "block_size": 512, 00:15:20.128 "num_blocks": 65536, 00:15:20.128 "uuid": "42fe54cc-06f4-44fa-8c5b-3d243bdac858", 00:15:20.128 "assigned_rate_limits": { 00:15:20.128 "rw_ios_per_sec": 0, 00:15:20.128 "rw_mbytes_per_sec": 0, 00:15:20.128 "r_mbytes_per_sec": 0, 00:15:20.128 "w_mbytes_per_sec": 0 00:15:20.128 }, 00:15:20.128 "claimed": true, 00:15:20.128 "claim_type": "exclusive_write", 00:15:20.128 "zoned": false, 00:15:20.128 "supported_io_types": { 00:15:20.128 "read": true, 00:15:20.128 "write": true, 00:15:20.128 "unmap": true, 00:15:20.128 "flush": true, 00:15:20.128 "reset": true, 00:15:20.128 "nvme_admin": false, 00:15:20.128 "nvme_io": false, 00:15:20.128 "nvme_io_md": false, 00:15:20.128 "write_zeroes": true, 00:15:20.128 "zcopy": true, 00:15:20.128 "get_zone_info": false, 00:15:20.128 "zone_management": false, 00:15:20.128 "zone_append": false, 00:15:20.128 "compare": false, 00:15:20.128 "compare_and_write": false, 00:15:20.128 "abort": true, 00:15:20.128 "seek_hole": false, 00:15:20.128 "seek_data": false, 00:15:20.128 "copy": true, 00:15:20.128 "nvme_iov_md": false 00:15:20.128 }, 00:15:20.128 "memory_domains": [ 00:15:20.128 { 00:15:20.128 "dma_device_id": "system", 00:15:20.128 "dma_device_type": 1 00:15:20.128 }, 00:15:20.128 { 00:15:20.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.128 "dma_device_type": 2 00:15:20.128 } 00:15:20.128 ], 00:15:20.128 "driver_specific": {} 00:15:20.128 } 00:15:20.128 ] 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.128 18:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.387 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:20.387 "name": "Existed_Raid", 00:15:20.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.387 "strip_size_kb": 64, 00:15:20.387 "state": "configuring", 00:15:20.387 "raid_level": "raid5f", 00:15:20.387 "superblock": false, 00:15:20.387 "num_base_bdevs": 3, 00:15:20.387 "num_base_bdevs_discovered": 2, 00:15:20.387 "num_base_bdevs_operational": 3, 00:15:20.387 "base_bdevs_list": [ 00:15:20.387 { 00:15:20.387 "name": "BaseBdev1", 00:15:20.387 "uuid": "323c138d-97a8-47c2-8794-9b69ae4ec836", 00:15:20.387 "is_configured": true, 00:15:20.387 "data_offset": 0, 00:15:20.387 "data_size": 65536 00:15:20.387 }, 00:15:20.387 { 00:15:20.387 "name": "BaseBdev2", 00:15:20.387 "uuid": "42fe54cc-06f4-44fa-8c5b-3d243bdac858", 00:15:20.387 "is_configured": true, 00:15:20.387 "data_offset": 0, 00:15:20.387 "data_size": 65536 00:15:20.387 }, 00:15:20.387 { 00:15:20.387 "name": "BaseBdev3", 00:15:20.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.387 "is_configured": false, 00:15:20.387 "data_offset": 0, 00:15:20.387 "data_size": 0 00:15:20.387 } 00:15:20.387 ] 00:15:20.387 }' 00:15:20.387 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.387 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.647 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:20.647 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.647 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.647 [2024-11-26 18:00:02.482734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:20.647 [2024-11-26 18:00:02.482925] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:20.647 [2024-11-26 18:00:02.482947] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:20.647 [2024-11-26 18:00:02.483447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:20.647 [2024-11-26 18:00:02.490408] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:20.647 [2024-11-26 18:00:02.490494] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:20.647 [2024-11-26 18:00:02.490914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.647 BaseBdev3 00:15:20.647 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.647 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:20.647 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:20.647 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:20.647 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:20.647 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:20.647 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:20.647 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:20.647 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.647 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.647 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.647 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:20.647 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.647 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.905 [ 00:15:20.905 { 00:15:20.905 "name": "BaseBdev3", 00:15:20.905 "aliases": [ 00:15:20.905 "3aa139b6-b639-4af2-9540-97037157b88a" 00:15:20.905 ], 00:15:20.905 "product_name": "Malloc disk", 00:15:20.905 "block_size": 512, 00:15:20.905 "num_blocks": 65536, 00:15:20.905 "uuid": "3aa139b6-b639-4af2-9540-97037157b88a", 00:15:20.905 "assigned_rate_limits": { 00:15:20.905 "rw_ios_per_sec": 0, 00:15:20.906 "rw_mbytes_per_sec": 0, 00:15:20.906 "r_mbytes_per_sec": 0, 00:15:20.906 "w_mbytes_per_sec": 0 00:15:20.906 }, 00:15:20.906 "claimed": true, 00:15:20.906 "claim_type": "exclusive_write", 00:15:20.906 "zoned": false, 00:15:20.906 "supported_io_types": { 00:15:20.906 "read": true, 00:15:20.906 "write": true, 00:15:20.906 "unmap": true, 00:15:20.906 "flush": true, 00:15:20.906 "reset": true, 00:15:20.906 "nvme_admin": false, 00:15:20.906 "nvme_io": false, 00:15:20.906 "nvme_io_md": false, 00:15:20.906 "write_zeroes": true, 00:15:20.906 "zcopy": true, 00:15:20.906 "get_zone_info": false, 00:15:20.906 "zone_management": false, 00:15:20.906 "zone_append": false, 00:15:20.906 "compare": false, 00:15:20.906 "compare_and_write": false, 00:15:20.906 "abort": true, 00:15:20.906 "seek_hole": false, 00:15:20.906 "seek_data": false, 00:15:20.906 "copy": true, 00:15:20.906 "nvme_iov_md": false 00:15:20.906 }, 00:15:20.906 "memory_domains": [ 00:15:20.906 { 00:15:20.906 "dma_device_id": "system", 00:15:20.906 "dma_device_type": 1 00:15:20.906 }, 00:15:20.906 { 00:15:20.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.906 "dma_device_type": 2 00:15:20.906 } 00:15:20.906 ], 00:15:20.906 "driver_specific": {} 00:15:20.906 } 00:15:20.906 ] 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.906 18:00:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.906 "name": "Existed_Raid", 00:15:20.906 "uuid": "b4f327f1-5089-4605-9653-f57e59af088a", 00:15:20.906 "strip_size_kb": 64, 00:15:20.906 "state": "online", 00:15:20.906 "raid_level": "raid5f", 00:15:20.906 "superblock": false, 00:15:20.906 "num_base_bdevs": 3, 00:15:20.906 "num_base_bdevs_discovered": 3, 00:15:20.906 "num_base_bdevs_operational": 3, 00:15:20.906 "base_bdevs_list": [ 00:15:20.906 { 00:15:20.906 "name": "BaseBdev1", 00:15:20.906 "uuid": "323c138d-97a8-47c2-8794-9b69ae4ec836", 00:15:20.906 "is_configured": true, 00:15:20.906 "data_offset": 0, 00:15:20.906 "data_size": 65536 00:15:20.906 }, 00:15:20.906 { 00:15:20.906 "name": "BaseBdev2", 00:15:20.906 "uuid": "42fe54cc-06f4-44fa-8c5b-3d243bdac858", 00:15:20.906 "is_configured": true, 00:15:20.906 "data_offset": 0, 00:15:20.906 "data_size": 65536 00:15:20.906 }, 00:15:20.906 { 00:15:20.906 "name": "BaseBdev3", 00:15:20.906 "uuid": "3aa139b6-b639-4af2-9540-97037157b88a", 00:15:20.906 "is_configured": true, 00:15:20.906 "data_offset": 0, 00:15:20.906 "data_size": 65536 00:15:20.906 } 00:15:20.906 ] 00:15:20.906 }' 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.906 18:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.165 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:21.165 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:21.165 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:21.165 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:21.165 18:00:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:21.165 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:21.166 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:21.166 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:21.166 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.166 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.166 [2024-11-26 18:00:03.018038] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:21.425 "name": "Existed_Raid", 00:15:21.425 "aliases": [ 00:15:21.425 "b4f327f1-5089-4605-9653-f57e59af088a" 00:15:21.425 ], 00:15:21.425 "product_name": "Raid Volume", 00:15:21.425 "block_size": 512, 00:15:21.425 "num_blocks": 131072, 00:15:21.425 "uuid": "b4f327f1-5089-4605-9653-f57e59af088a", 00:15:21.425 "assigned_rate_limits": { 00:15:21.425 "rw_ios_per_sec": 0, 00:15:21.425 "rw_mbytes_per_sec": 0, 00:15:21.425 "r_mbytes_per_sec": 0, 00:15:21.425 "w_mbytes_per_sec": 0 00:15:21.425 }, 00:15:21.425 "claimed": false, 00:15:21.425 "zoned": false, 00:15:21.425 "supported_io_types": { 00:15:21.425 "read": true, 00:15:21.425 "write": true, 00:15:21.425 "unmap": false, 00:15:21.425 "flush": false, 00:15:21.425 "reset": true, 00:15:21.425 "nvme_admin": false, 00:15:21.425 "nvme_io": false, 00:15:21.425 "nvme_io_md": false, 00:15:21.425 "write_zeroes": true, 00:15:21.425 "zcopy": false, 00:15:21.425 "get_zone_info": false, 00:15:21.425 "zone_management": false, 00:15:21.425 "zone_append": false, 
00:15:21.425 "compare": false, 00:15:21.425 "compare_and_write": false, 00:15:21.425 "abort": false, 00:15:21.425 "seek_hole": false, 00:15:21.425 "seek_data": false, 00:15:21.425 "copy": false, 00:15:21.425 "nvme_iov_md": false 00:15:21.425 }, 00:15:21.425 "driver_specific": { 00:15:21.425 "raid": { 00:15:21.425 "uuid": "b4f327f1-5089-4605-9653-f57e59af088a", 00:15:21.425 "strip_size_kb": 64, 00:15:21.425 "state": "online", 00:15:21.425 "raid_level": "raid5f", 00:15:21.425 "superblock": false, 00:15:21.425 "num_base_bdevs": 3, 00:15:21.425 "num_base_bdevs_discovered": 3, 00:15:21.425 "num_base_bdevs_operational": 3, 00:15:21.425 "base_bdevs_list": [ 00:15:21.425 { 00:15:21.425 "name": "BaseBdev1", 00:15:21.425 "uuid": "323c138d-97a8-47c2-8794-9b69ae4ec836", 00:15:21.425 "is_configured": true, 00:15:21.425 "data_offset": 0, 00:15:21.425 "data_size": 65536 00:15:21.425 }, 00:15:21.425 { 00:15:21.425 "name": "BaseBdev2", 00:15:21.425 "uuid": "42fe54cc-06f4-44fa-8c5b-3d243bdac858", 00:15:21.425 "is_configured": true, 00:15:21.425 "data_offset": 0, 00:15:21.425 "data_size": 65536 00:15:21.425 }, 00:15:21.425 { 00:15:21.425 "name": "BaseBdev3", 00:15:21.425 "uuid": "3aa139b6-b639-4af2-9540-97037157b88a", 00:15:21.425 "is_configured": true, 00:15:21.425 "data_offset": 0, 00:15:21.425 "data_size": 65536 00:15:21.425 } 00:15:21.425 ] 00:15:21.425 } 00:15:21.425 } 00:15:21.425 }' 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:21.425 BaseBdev2 00:15:21.425 BaseBdev3' 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.425 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.686 [2024-11-26 18:00:03.301633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:21.686 
18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.686 "name": "Existed_Raid", 00:15:21.686 "uuid": "b4f327f1-5089-4605-9653-f57e59af088a", 00:15:21.686 "strip_size_kb": 64, 00:15:21.686 "state": 
"online", 00:15:21.686 "raid_level": "raid5f", 00:15:21.686 "superblock": false, 00:15:21.686 "num_base_bdevs": 3, 00:15:21.686 "num_base_bdevs_discovered": 2, 00:15:21.686 "num_base_bdevs_operational": 2, 00:15:21.686 "base_bdevs_list": [ 00:15:21.686 { 00:15:21.686 "name": null, 00:15:21.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.686 "is_configured": false, 00:15:21.686 "data_offset": 0, 00:15:21.686 "data_size": 65536 00:15:21.686 }, 00:15:21.686 { 00:15:21.686 "name": "BaseBdev2", 00:15:21.686 "uuid": "42fe54cc-06f4-44fa-8c5b-3d243bdac858", 00:15:21.686 "is_configured": true, 00:15:21.686 "data_offset": 0, 00:15:21.686 "data_size": 65536 00:15:21.686 }, 00:15:21.686 { 00:15:21.686 "name": "BaseBdev3", 00:15:21.686 "uuid": "3aa139b6-b639-4af2-9540-97037157b88a", 00:15:21.686 "is_configured": true, 00:15:21.686 "data_offset": 0, 00:15:21.686 "data_size": 65536 00:15:21.686 } 00:15:21.686 ] 00:15:21.686 }' 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.686 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.254 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:22.254 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:22.254 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.254 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.254 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:22.254 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.254 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.254 18:00:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:22.254 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:22.255 18:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:22.255 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.255 18:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.255 [2024-11-26 18:00:03.928708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:22.255 [2024-11-26 18:00:03.928838] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.255 [2024-11-26 18:00:04.045378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.255 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.255 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:22.255 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:22.255 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.255 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.255 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.255 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:22.255 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.255 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:22.255 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:22.255 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:22.255 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.255 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.255 [2024-11-26 18:00:04.109365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:22.255 [2024-11-26 18:00:04.109454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.515 BaseBdev2 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:22.515 [ 00:15:22.515 { 00:15:22.515 "name": "BaseBdev2", 00:15:22.515 "aliases": [ 00:15:22.515 "41ca5f28-1feb-4dd7-baea-f0b6d536290d" 00:15:22.515 ], 00:15:22.515 "product_name": "Malloc disk", 00:15:22.515 "block_size": 512, 00:15:22.515 "num_blocks": 65536, 00:15:22.515 "uuid": "41ca5f28-1feb-4dd7-baea-f0b6d536290d", 00:15:22.515 "assigned_rate_limits": { 00:15:22.515 "rw_ios_per_sec": 0, 00:15:22.515 "rw_mbytes_per_sec": 0, 00:15:22.515 "r_mbytes_per_sec": 0, 00:15:22.515 "w_mbytes_per_sec": 0 00:15:22.515 }, 00:15:22.515 "claimed": false, 00:15:22.515 "zoned": false, 00:15:22.515 "supported_io_types": { 00:15:22.515 "read": true, 00:15:22.515 "write": true, 00:15:22.515 "unmap": true, 00:15:22.515 "flush": true, 00:15:22.515 "reset": true, 00:15:22.515 "nvme_admin": false, 00:15:22.515 "nvme_io": false, 00:15:22.515 "nvme_io_md": false, 00:15:22.515 "write_zeroes": true, 00:15:22.515 "zcopy": true, 00:15:22.515 "get_zone_info": false, 00:15:22.515 "zone_management": false, 00:15:22.515 "zone_append": false, 00:15:22.515 "compare": false, 00:15:22.515 "compare_and_write": false, 00:15:22.515 "abort": true, 00:15:22.515 "seek_hole": false, 00:15:22.515 "seek_data": false, 00:15:22.515 "copy": true, 00:15:22.515 "nvme_iov_md": false 00:15:22.515 }, 00:15:22.515 "memory_domains": [ 00:15:22.515 { 00:15:22.515 "dma_device_id": "system", 00:15:22.515 "dma_device_type": 1 00:15:22.515 }, 00:15:22.515 { 00:15:22.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.515 "dma_device_type": 2 00:15:22.515 } 00:15:22.515 ], 00:15:22.515 "driver_specific": {} 00:15:22.515 } 00:15:22.515 ] 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:22.515 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.516 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.776 BaseBdev3 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:22.776 [ 00:15:22.776 { 00:15:22.776 "name": "BaseBdev3", 00:15:22.776 "aliases": [ 00:15:22.776 "3690a38e-35d8-4286-8c85-0de86cb8b8fa" 00:15:22.776 ], 00:15:22.776 "product_name": "Malloc disk", 00:15:22.776 "block_size": 512, 00:15:22.776 "num_blocks": 65536, 00:15:22.776 "uuid": "3690a38e-35d8-4286-8c85-0de86cb8b8fa", 00:15:22.776 "assigned_rate_limits": { 00:15:22.776 "rw_ios_per_sec": 0, 00:15:22.776 "rw_mbytes_per_sec": 0, 00:15:22.776 "r_mbytes_per_sec": 0, 00:15:22.776 "w_mbytes_per_sec": 0 00:15:22.776 }, 00:15:22.776 "claimed": false, 00:15:22.776 "zoned": false, 00:15:22.776 "supported_io_types": { 00:15:22.776 "read": true, 00:15:22.776 "write": true, 00:15:22.776 "unmap": true, 00:15:22.776 "flush": true, 00:15:22.776 "reset": true, 00:15:22.776 "nvme_admin": false, 00:15:22.776 "nvme_io": false, 00:15:22.776 "nvme_io_md": false, 00:15:22.776 "write_zeroes": true, 00:15:22.776 "zcopy": true, 00:15:22.776 "get_zone_info": false, 00:15:22.776 "zone_management": false, 00:15:22.776 "zone_append": false, 00:15:22.776 "compare": false, 00:15:22.776 "compare_and_write": false, 00:15:22.776 "abort": true, 00:15:22.776 "seek_hole": false, 00:15:22.776 "seek_data": false, 00:15:22.776 "copy": true, 00:15:22.776 "nvme_iov_md": false 00:15:22.776 }, 00:15:22.776 "memory_domains": [ 00:15:22.776 { 00:15:22.776 "dma_device_id": "system", 00:15:22.776 "dma_device_type": 1 00:15:22.776 }, 00:15:22.776 { 00:15:22.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.776 "dma_device_type": 2 00:15:22.776 } 00:15:22.776 ], 00:15:22.776 "driver_specific": {} 00:15:22.776 } 00:15:22.776 ] 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:22.776 18:00:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.776 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.776 [2024-11-26 18:00:04.458389] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:22.777 [2024-11-26 18:00:04.458566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:22.777 [2024-11-26 18:00:04.458638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.777 [2024-11-26 18:00:04.460908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:22.777 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.777 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:22.777 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.777 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.777 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.777 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.777 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.777 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.777 18:00:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.777 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.777 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.777 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.777 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.777 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.777 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.777 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.777 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.777 "name": "Existed_Raid", 00:15:22.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.777 "strip_size_kb": 64, 00:15:22.777 "state": "configuring", 00:15:22.777 "raid_level": "raid5f", 00:15:22.777 "superblock": false, 00:15:22.777 "num_base_bdevs": 3, 00:15:22.777 "num_base_bdevs_discovered": 2, 00:15:22.777 "num_base_bdevs_operational": 3, 00:15:22.777 "base_bdevs_list": [ 00:15:22.777 { 00:15:22.777 "name": "BaseBdev1", 00:15:22.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.777 "is_configured": false, 00:15:22.777 "data_offset": 0, 00:15:22.777 "data_size": 0 00:15:22.777 }, 00:15:22.777 { 00:15:22.777 "name": "BaseBdev2", 00:15:22.777 "uuid": "41ca5f28-1feb-4dd7-baea-f0b6d536290d", 00:15:22.777 "is_configured": true, 00:15:22.777 "data_offset": 0, 00:15:22.777 "data_size": 65536 00:15:22.777 }, 00:15:22.777 { 00:15:22.777 "name": "BaseBdev3", 00:15:22.777 "uuid": "3690a38e-35d8-4286-8c85-0de86cb8b8fa", 00:15:22.777 "is_configured": true, 
00:15:22.777 "data_offset": 0, 00:15:22.777 "data_size": 65536 00:15:22.777 } 00:15:22.777 ] 00:15:22.777 }' 00:15:22.777 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.777 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.345 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:23.345 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.345 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.345 [2024-11-26 18:00:04.965613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:23.345 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.345 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:23.345 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.345 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.345 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.345 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.345 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.345 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.345 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.345 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.345 18:00:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.345 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.345 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.345 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.345 18:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.345 18:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.345 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.345 "name": "Existed_Raid", 00:15:23.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.345 "strip_size_kb": 64, 00:15:23.345 "state": "configuring", 00:15:23.345 "raid_level": "raid5f", 00:15:23.345 "superblock": false, 00:15:23.345 "num_base_bdevs": 3, 00:15:23.345 "num_base_bdevs_discovered": 1, 00:15:23.345 "num_base_bdevs_operational": 3, 00:15:23.345 "base_bdevs_list": [ 00:15:23.345 { 00:15:23.345 "name": "BaseBdev1", 00:15:23.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.345 "is_configured": false, 00:15:23.345 "data_offset": 0, 00:15:23.345 "data_size": 0 00:15:23.345 }, 00:15:23.345 { 00:15:23.345 "name": null, 00:15:23.345 "uuid": "41ca5f28-1feb-4dd7-baea-f0b6d536290d", 00:15:23.345 "is_configured": false, 00:15:23.345 "data_offset": 0, 00:15:23.345 "data_size": 65536 00:15:23.345 }, 00:15:23.345 { 00:15:23.345 "name": "BaseBdev3", 00:15:23.345 "uuid": "3690a38e-35d8-4286-8c85-0de86cb8b8fa", 00:15:23.345 "is_configured": true, 00:15:23.345 "data_offset": 0, 00:15:23.345 "data_size": 65536 00:15:23.345 } 00:15:23.345 ] 00:15:23.345 }' 00:15:23.345 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.345 18:00:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.604 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.604 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:23.605 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.605 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.605 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.605 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:23.605 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:23.605 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.605 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.605 [2024-11-26 18:00:05.465113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.605 BaseBdev1 00:15:23.864 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.864 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:23.864 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:23.864 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:23.864 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:23.865 18:00:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.865 [ 00:15:23.865 { 00:15:23.865 "name": "BaseBdev1", 00:15:23.865 "aliases": [ 00:15:23.865 "022c1880-6a80-4e95-875b-b3f9ca622200" 00:15:23.865 ], 00:15:23.865 "product_name": "Malloc disk", 00:15:23.865 "block_size": 512, 00:15:23.865 "num_blocks": 65536, 00:15:23.865 "uuid": "022c1880-6a80-4e95-875b-b3f9ca622200", 00:15:23.865 "assigned_rate_limits": { 00:15:23.865 "rw_ios_per_sec": 0, 00:15:23.865 "rw_mbytes_per_sec": 0, 00:15:23.865 "r_mbytes_per_sec": 0, 00:15:23.865 "w_mbytes_per_sec": 0 00:15:23.865 }, 00:15:23.865 "claimed": true, 00:15:23.865 "claim_type": "exclusive_write", 00:15:23.865 "zoned": false, 00:15:23.865 "supported_io_types": { 00:15:23.865 "read": true, 00:15:23.865 "write": true, 00:15:23.865 "unmap": true, 00:15:23.865 "flush": true, 00:15:23.865 "reset": true, 00:15:23.865 "nvme_admin": false, 00:15:23.865 "nvme_io": false, 00:15:23.865 "nvme_io_md": false, 00:15:23.865 "write_zeroes": true, 00:15:23.865 "zcopy": true, 00:15:23.865 "get_zone_info": false, 00:15:23.865 "zone_management": false, 00:15:23.865 "zone_append": false, 00:15:23.865 
"compare": false, 00:15:23.865 "compare_and_write": false, 00:15:23.865 "abort": true, 00:15:23.865 "seek_hole": false, 00:15:23.865 "seek_data": false, 00:15:23.865 "copy": true, 00:15:23.865 "nvme_iov_md": false 00:15:23.865 }, 00:15:23.865 "memory_domains": [ 00:15:23.865 { 00:15:23.865 "dma_device_id": "system", 00:15:23.865 "dma_device_type": 1 00:15:23.865 }, 00:15:23.865 { 00:15:23.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.865 "dma_device_type": 2 00:15:23.865 } 00:15:23.865 ], 00:15:23.865 "driver_specific": {} 00:15:23.865 } 00:15:23.865 ] 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.865 18:00:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.865 "name": "Existed_Raid", 00:15:23.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.865 "strip_size_kb": 64, 00:15:23.865 "state": "configuring", 00:15:23.865 "raid_level": "raid5f", 00:15:23.865 "superblock": false, 00:15:23.865 "num_base_bdevs": 3, 00:15:23.865 "num_base_bdevs_discovered": 2, 00:15:23.865 "num_base_bdevs_operational": 3, 00:15:23.865 "base_bdevs_list": [ 00:15:23.865 { 00:15:23.865 "name": "BaseBdev1", 00:15:23.865 "uuid": "022c1880-6a80-4e95-875b-b3f9ca622200", 00:15:23.865 "is_configured": true, 00:15:23.865 "data_offset": 0, 00:15:23.865 "data_size": 65536 00:15:23.865 }, 00:15:23.865 { 00:15:23.865 "name": null, 00:15:23.865 "uuid": "41ca5f28-1feb-4dd7-baea-f0b6d536290d", 00:15:23.865 "is_configured": false, 00:15:23.865 "data_offset": 0, 00:15:23.865 "data_size": 65536 00:15:23.865 }, 00:15:23.865 { 00:15:23.865 "name": "BaseBdev3", 00:15:23.865 "uuid": "3690a38e-35d8-4286-8c85-0de86cb8b8fa", 00:15:23.865 "is_configured": true, 00:15:23.865 "data_offset": 0, 00:15:23.865 "data_size": 65536 00:15:23.865 } 00:15:23.865 ] 00:15:23.865 }' 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.865 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.124 18:00:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:24.124 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.124 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.124 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.124 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.382 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:24.382 18:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:24.382 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.382 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.382 [2024-11-26 18:00:05.996387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:24.382 18:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.382 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:24.382 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.383 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.383 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.383 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.383 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.383 18:00:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.383 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.383 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.383 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.383 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.383 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.383 18:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.383 18:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.383 18:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.383 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.383 "name": "Existed_Raid", 00:15:24.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.383 "strip_size_kb": 64, 00:15:24.383 "state": "configuring", 00:15:24.383 "raid_level": "raid5f", 00:15:24.383 "superblock": false, 00:15:24.383 "num_base_bdevs": 3, 00:15:24.383 "num_base_bdevs_discovered": 1, 00:15:24.383 "num_base_bdevs_operational": 3, 00:15:24.383 "base_bdevs_list": [ 00:15:24.383 { 00:15:24.383 "name": "BaseBdev1", 00:15:24.383 "uuid": "022c1880-6a80-4e95-875b-b3f9ca622200", 00:15:24.383 "is_configured": true, 00:15:24.383 "data_offset": 0, 00:15:24.383 "data_size": 65536 00:15:24.383 }, 00:15:24.383 { 00:15:24.383 "name": null, 00:15:24.383 "uuid": "41ca5f28-1feb-4dd7-baea-f0b6d536290d", 00:15:24.383 "is_configured": false, 00:15:24.383 "data_offset": 0, 00:15:24.383 "data_size": 65536 00:15:24.383 }, 00:15:24.383 { 00:15:24.383 "name": null, 
00:15:24.383 "uuid": "3690a38e-35d8-4286-8c85-0de86cb8b8fa", 00:15:24.383 "is_configured": false, 00:15:24.383 "data_offset": 0, 00:15:24.383 "data_size": 65536 00:15:24.383 } 00:15:24.383 ] 00:15:24.383 }' 00:15:24.383 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.383 18:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.641 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:24.641 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.641 18:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.641 18:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.641 18:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.927 [2024-11-26 18:00:06.515533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.927 18:00:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.927 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.927 "name": "Existed_Raid", 00:15:24.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.928 "strip_size_kb": 64, 00:15:24.928 "state": "configuring", 00:15:24.928 "raid_level": "raid5f", 00:15:24.928 "superblock": false, 00:15:24.928 "num_base_bdevs": 3, 00:15:24.928 "num_base_bdevs_discovered": 2, 00:15:24.928 "num_base_bdevs_operational": 3, 00:15:24.928 "base_bdevs_list": [ 00:15:24.928 { 
00:15:24.928 "name": "BaseBdev1", 00:15:24.928 "uuid": "022c1880-6a80-4e95-875b-b3f9ca622200", 00:15:24.928 "is_configured": true, 00:15:24.928 "data_offset": 0, 00:15:24.928 "data_size": 65536 00:15:24.928 }, 00:15:24.928 { 00:15:24.928 "name": null, 00:15:24.928 "uuid": "41ca5f28-1feb-4dd7-baea-f0b6d536290d", 00:15:24.928 "is_configured": false, 00:15:24.928 "data_offset": 0, 00:15:24.928 "data_size": 65536 00:15:24.928 }, 00:15:24.928 { 00:15:24.928 "name": "BaseBdev3", 00:15:24.928 "uuid": "3690a38e-35d8-4286-8c85-0de86cb8b8fa", 00:15:24.928 "is_configured": true, 00:15:24.928 "data_offset": 0, 00:15:24.928 "data_size": 65536 00:15:24.928 } 00:15:24.928 ] 00:15:24.928 }' 00:15:24.928 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.928 18:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.187 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.187 18:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:25.187 18:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.187 18:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.187 18:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.187 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:25.187 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:25.187 18:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.187 18:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.187 [2024-11-26 18:00:07.014789] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:25.446 18:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.446 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:25.446 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.446 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.446 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.446 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.446 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.446 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.446 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.446 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.446 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.446 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.446 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.446 18:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.446 18:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.446 18:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.446 18:00:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.446 "name": "Existed_Raid", 00:15:25.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.446 "strip_size_kb": 64, 00:15:25.447 "state": "configuring", 00:15:25.447 "raid_level": "raid5f", 00:15:25.447 "superblock": false, 00:15:25.447 "num_base_bdevs": 3, 00:15:25.447 "num_base_bdevs_discovered": 1, 00:15:25.447 "num_base_bdevs_operational": 3, 00:15:25.447 "base_bdevs_list": [ 00:15:25.447 { 00:15:25.447 "name": null, 00:15:25.447 "uuid": "022c1880-6a80-4e95-875b-b3f9ca622200", 00:15:25.447 "is_configured": false, 00:15:25.447 "data_offset": 0, 00:15:25.447 "data_size": 65536 00:15:25.447 }, 00:15:25.447 { 00:15:25.447 "name": null, 00:15:25.447 "uuid": "41ca5f28-1feb-4dd7-baea-f0b6d536290d", 00:15:25.447 "is_configured": false, 00:15:25.447 "data_offset": 0, 00:15:25.447 "data_size": 65536 00:15:25.447 }, 00:15:25.447 { 00:15:25.447 "name": "BaseBdev3", 00:15:25.447 "uuid": "3690a38e-35d8-4286-8c85-0de86cb8b8fa", 00:15:25.447 "is_configured": true, 00:15:25.447 "data_offset": 0, 00:15:25.447 "data_size": 65536 00:15:25.447 } 00:15:25.447 ] 00:15:25.447 }' 00:15:25.447 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.447 18:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.014 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.015 [2024-11-26 18:00:07.632385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.015 18:00:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.015 "name": "Existed_Raid", 00:15:26.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.015 "strip_size_kb": 64, 00:15:26.015 "state": "configuring", 00:15:26.015 "raid_level": "raid5f", 00:15:26.015 "superblock": false, 00:15:26.015 "num_base_bdevs": 3, 00:15:26.015 "num_base_bdevs_discovered": 2, 00:15:26.015 "num_base_bdevs_operational": 3, 00:15:26.015 "base_bdevs_list": [ 00:15:26.015 { 00:15:26.015 "name": null, 00:15:26.015 "uuid": "022c1880-6a80-4e95-875b-b3f9ca622200", 00:15:26.015 "is_configured": false, 00:15:26.015 "data_offset": 0, 00:15:26.015 "data_size": 65536 00:15:26.015 }, 00:15:26.015 { 00:15:26.015 "name": "BaseBdev2", 00:15:26.015 "uuid": "41ca5f28-1feb-4dd7-baea-f0b6d536290d", 00:15:26.015 "is_configured": true, 00:15:26.015 "data_offset": 0, 00:15:26.015 "data_size": 65536 00:15:26.015 }, 00:15:26.015 { 00:15:26.015 "name": "BaseBdev3", 00:15:26.015 "uuid": "3690a38e-35d8-4286-8c85-0de86cb8b8fa", 00:15:26.015 "is_configured": true, 00:15:26.015 "data_offset": 0, 00:15:26.015 "data_size": 65536 00:15:26.015 } 00:15:26.015 ] 00:15:26.015 }' 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.015 18:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.274 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.274 18:00:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.274 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.274 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:26.274 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 022c1880-6a80-4e95-875b-b3f9ca622200 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.535 [2024-11-26 18:00:08.246829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:26.535 [2024-11-26 18:00:08.246985] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:26.535 [2024-11-26 18:00:08.247005] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:26.535 [2024-11-26 18:00:08.247327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:26.535 [2024-11-26 18:00:08.253804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:26.535 [2024-11-26 18:00:08.253898] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:26.535 [2024-11-26 18:00:08.254266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.535 NewBaseBdev 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.535 18:00:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.535 [ 00:15:26.535 { 00:15:26.535 "name": "NewBaseBdev", 00:15:26.535 "aliases": [ 00:15:26.535 "022c1880-6a80-4e95-875b-b3f9ca622200" 00:15:26.535 ], 00:15:26.535 "product_name": "Malloc disk", 00:15:26.535 "block_size": 512, 00:15:26.535 "num_blocks": 65536, 00:15:26.535 "uuid": "022c1880-6a80-4e95-875b-b3f9ca622200", 00:15:26.535 "assigned_rate_limits": { 00:15:26.535 "rw_ios_per_sec": 0, 00:15:26.535 "rw_mbytes_per_sec": 0, 00:15:26.535 "r_mbytes_per_sec": 0, 00:15:26.535 "w_mbytes_per_sec": 0 00:15:26.535 }, 00:15:26.535 "claimed": true, 00:15:26.535 "claim_type": "exclusive_write", 00:15:26.535 "zoned": false, 00:15:26.535 "supported_io_types": { 00:15:26.535 "read": true, 00:15:26.535 "write": true, 00:15:26.535 "unmap": true, 00:15:26.535 "flush": true, 00:15:26.535 "reset": true, 00:15:26.535 "nvme_admin": false, 00:15:26.535 "nvme_io": false, 00:15:26.535 "nvme_io_md": false, 00:15:26.535 "write_zeroes": true, 00:15:26.535 "zcopy": true, 00:15:26.535 "get_zone_info": false, 00:15:26.535 "zone_management": false, 00:15:26.535 "zone_append": false, 00:15:26.535 "compare": false, 00:15:26.535 "compare_and_write": false, 00:15:26.535 "abort": true, 00:15:26.535 "seek_hole": false, 00:15:26.535 "seek_data": false, 00:15:26.535 "copy": true, 00:15:26.535 "nvme_iov_md": false 00:15:26.535 }, 00:15:26.535 "memory_domains": [ 00:15:26.535 { 00:15:26.535 "dma_device_id": "system", 00:15:26.535 "dma_device_type": 1 00:15:26.535 }, 00:15:26.535 { 00:15:26.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.535 "dma_device_type": 2 00:15:26.535 } 00:15:26.535 ], 00:15:26.535 "driver_specific": {} 00:15:26.535 } 00:15:26.535 ] 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:26.535 18:00:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.535 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.535 "name": "Existed_Raid", 00:15:26.535 "uuid": "e8a3aaac-a2d7-4a4b-94d5-1e725fe1e3a6", 00:15:26.535 "strip_size_kb": 64, 00:15:26.535 "state": "online", 
00:15:26.535 "raid_level": "raid5f", 00:15:26.535 "superblock": false, 00:15:26.535 "num_base_bdevs": 3, 00:15:26.535 "num_base_bdevs_discovered": 3, 00:15:26.535 "num_base_bdevs_operational": 3, 00:15:26.535 "base_bdevs_list": [ 00:15:26.535 { 00:15:26.535 "name": "NewBaseBdev", 00:15:26.535 "uuid": "022c1880-6a80-4e95-875b-b3f9ca622200", 00:15:26.535 "is_configured": true, 00:15:26.535 "data_offset": 0, 00:15:26.535 "data_size": 65536 00:15:26.535 }, 00:15:26.535 { 00:15:26.535 "name": "BaseBdev2", 00:15:26.535 "uuid": "41ca5f28-1feb-4dd7-baea-f0b6d536290d", 00:15:26.535 "is_configured": true, 00:15:26.535 "data_offset": 0, 00:15:26.535 "data_size": 65536 00:15:26.535 }, 00:15:26.535 { 00:15:26.535 "name": "BaseBdev3", 00:15:26.535 "uuid": "3690a38e-35d8-4286-8c85-0de86cb8b8fa", 00:15:26.535 "is_configured": true, 00:15:26.535 "data_offset": 0, 00:15:26.536 "data_size": 65536 00:15:26.536 } 00:15:26.536 ] 00:15:26.536 }' 00:15:26.536 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.536 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.104 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:27.104 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:27.104 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:27.104 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:27.104 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:27.104 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:27.104 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:27.104 18:00:08 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:27.104 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.104 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.104 [2024-11-26 18:00:08.797686] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.104 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.104 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:27.104 "name": "Existed_Raid", 00:15:27.104 "aliases": [ 00:15:27.104 "e8a3aaac-a2d7-4a4b-94d5-1e725fe1e3a6" 00:15:27.104 ], 00:15:27.104 "product_name": "Raid Volume", 00:15:27.104 "block_size": 512, 00:15:27.104 "num_blocks": 131072, 00:15:27.104 "uuid": "e8a3aaac-a2d7-4a4b-94d5-1e725fe1e3a6", 00:15:27.104 "assigned_rate_limits": { 00:15:27.104 "rw_ios_per_sec": 0, 00:15:27.104 "rw_mbytes_per_sec": 0, 00:15:27.104 "r_mbytes_per_sec": 0, 00:15:27.104 "w_mbytes_per_sec": 0 00:15:27.104 }, 00:15:27.104 "claimed": false, 00:15:27.104 "zoned": false, 00:15:27.104 "supported_io_types": { 00:15:27.104 "read": true, 00:15:27.104 "write": true, 00:15:27.104 "unmap": false, 00:15:27.104 "flush": false, 00:15:27.104 "reset": true, 00:15:27.104 "nvme_admin": false, 00:15:27.104 "nvme_io": false, 00:15:27.104 "nvme_io_md": false, 00:15:27.104 "write_zeroes": true, 00:15:27.104 "zcopy": false, 00:15:27.104 "get_zone_info": false, 00:15:27.104 "zone_management": false, 00:15:27.104 "zone_append": false, 00:15:27.104 "compare": false, 00:15:27.104 "compare_and_write": false, 00:15:27.104 "abort": false, 00:15:27.104 "seek_hole": false, 00:15:27.104 "seek_data": false, 00:15:27.104 "copy": false, 00:15:27.104 "nvme_iov_md": false 00:15:27.104 }, 00:15:27.104 "driver_specific": { 00:15:27.104 "raid": { 00:15:27.104 "uuid": "e8a3aaac-a2d7-4a4b-94d5-1e725fe1e3a6", 
00:15:27.104 "strip_size_kb": 64, 00:15:27.104 "state": "online", 00:15:27.104 "raid_level": "raid5f", 00:15:27.104 "superblock": false, 00:15:27.104 "num_base_bdevs": 3, 00:15:27.104 "num_base_bdevs_discovered": 3, 00:15:27.104 "num_base_bdevs_operational": 3, 00:15:27.104 "base_bdevs_list": [ 00:15:27.104 { 00:15:27.104 "name": "NewBaseBdev", 00:15:27.104 "uuid": "022c1880-6a80-4e95-875b-b3f9ca622200", 00:15:27.104 "is_configured": true, 00:15:27.104 "data_offset": 0, 00:15:27.104 "data_size": 65536 00:15:27.104 }, 00:15:27.104 { 00:15:27.104 "name": "BaseBdev2", 00:15:27.104 "uuid": "41ca5f28-1feb-4dd7-baea-f0b6d536290d", 00:15:27.104 "is_configured": true, 00:15:27.104 "data_offset": 0, 00:15:27.104 "data_size": 65536 00:15:27.104 }, 00:15:27.104 { 00:15:27.104 "name": "BaseBdev3", 00:15:27.104 "uuid": "3690a38e-35d8-4286-8c85-0de86cb8b8fa", 00:15:27.104 "is_configured": true, 00:15:27.105 "data_offset": 0, 00:15:27.105 "data_size": 65536 00:15:27.105 } 00:15:27.105 ] 00:15:27.105 } 00:15:27.105 } 00:15:27.105 }' 00:15:27.105 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:27.105 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:27.105 BaseBdev2 00:15:27.105 BaseBdev3' 00:15:27.105 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.105 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:27.105 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.105 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.105 18:00:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:27.105 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.105 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.105 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.105 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.105 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.105 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.105 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:27.105 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.105 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.105 18:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.373 18:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.373 
18:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.373 [2024-11-26 18:00:09.064985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:27.373 [2024-11-26 18:00:09.065125] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.373 [2024-11-26 18:00:09.065283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.373 [2024-11-26 18:00:09.065681] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.373 [2024-11-26 18:00:09.065755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80267 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80267 ']' 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80267 
00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80267 00:15:27.373 killing process with pid 80267 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80267' 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80267 00:15:27.373 [2024-11-26 18:00:09.112358] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:27.373 18:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80267 00:15:27.632 [2024-11-26 18:00:09.469310] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:29.008 18:00:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:29.008 00:15:29.008 real 0m11.353s 00:15:29.008 user 0m17.929s 00:15:29.008 sys 0m1.970s 00:15:29.008 ************************************ 00:15:29.008 END TEST raid5f_state_function_test 00:15:29.008 ************************************ 00:15:29.008 18:00:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.008 18:00:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.008 18:00:10 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:29.008 18:00:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:29.008 
18:00:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.008 18:00:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:29.008 ************************************ 00:15:29.008 START TEST raid5f_state_function_test_sb 00:15:29.009 ************************************ 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:29.009 
18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80897 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80897' 00:15:29.009 Process raid pid: 80897 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80897 00:15:29.009 18:00:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80897 ']' 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.009 18:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.269 [2024-11-26 18:00:10.909728] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:15:29.269 [2024-11-26 18:00:10.909925] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.269 [2024-11-26 18:00:11.086779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.530 [2024-11-26 18:00:11.213925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.789 [2024-11-26 18:00:11.434880] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.789 [2024-11-26 18:00:11.435012] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:30.049 18:00:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.049 [2024-11-26 18:00:11.777668] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:30.049 [2024-11-26 18:00:11.777823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:30.049 [2024-11-26 18:00:11.777901] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.049 [2024-11-26 18:00:11.777957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.049 [2024-11-26 18:00:11.778009] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:30.049 [2024-11-26 18:00:11.778076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.049 "name": "Existed_Raid", 00:15:30.049 "uuid": "7c38a336-b30d-4f60-8b08-e8d27f95b2c7", 00:15:30.049 "strip_size_kb": 64, 00:15:30.049 "state": "configuring", 00:15:30.049 "raid_level": "raid5f", 00:15:30.049 "superblock": true, 00:15:30.049 "num_base_bdevs": 3, 00:15:30.049 "num_base_bdevs_discovered": 0, 00:15:30.049 "num_base_bdevs_operational": 3, 00:15:30.049 "base_bdevs_list": [ 00:15:30.049 { 00:15:30.049 "name": "BaseBdev1", 00:15:30.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.049 "is_configured": false, 00:15:30.049 "data_offset": 0, 00:15:30.049 "data_size": 0 00:15:30.049 }, 00:15:30.049 { 00:15:30.049 "name": "BaseBdev2", 00:15:30.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.049 "is_configured": false, 00:15:30.049 
"data_offset": 0, 00:15:30.049 "data_size": 0 00:15:30.049 }, 00:15:30.049 { 00:15:30.049 "name": "BaseBdev3", 00:15:30.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.049 "is_configured": false, 00:15:30.049 "data_offset": 0, 00:15:30.049 "data_size": 0 00:15:30.049 } 00:15:30.049 ] 00:15:30.049 }' 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.049 18:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.618 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:30.618 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.618 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.618 [2024-11-26 18:00:12.220795] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:30.618 [2024-11-26 18:00:12.220897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:30.618 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.618 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:30.618 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.618 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.618 [2024-11-26 18:00:12.232781] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:30.618 [2024-11-26 18:00:12.232877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:30.618 [2024-11-26 18:00:12.232923] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.618 [2024-11-26 18:00:12.232951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.618 [2024-11-26 18:00:12.232986] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:30.618 [2024-11-26 18:00:12.233012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.619 [2024-11-26 18:00:12.286236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.619 BaseBdev1 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.619 [ 00:15:30.619 { 00:15:30.619 "name": "BaseBdev1", 00:15:30.619 "aliases": [ 00:15:30.619 "0e42c809-88e0-4354-a8d4-f270f6090fdb" 00:15:30.619 ], 00:15:30.619 "product_name": "Malloc disk", 00:15:30.619 "block_size": 512, 00:15:30.619 "num_blocks": 65536, 00:15:30.619 "uuid": "0e42c809-88e0-4354-a8d4-f270f6090fdb", 00:15:30.619 "assigned_rate_limits": { 00:15:30.619 "rw_ios_per_sec": 0, 00:15:30.619 "rw_mbytes_per_sec": 0, 00:15:30.619 "r_mbytes_per_sec": 0, 00:15:30.619 "w_mbytes_per_sec": 0 00:15:30.619 }, 00:15:30.619 "claimed": true, 00:15:30.619 "claim_type": "exclusive_write", 00:15:30.619 "zoned": false, 00:15:30.619 "supported_io_types": { 00:15:30.619 "read": true, 00:15:30.619 "write": true, 00:15:30.619 "unmap": true, 00:15:30.619 "flush": true, 00:15:30.619 "reset": true, 00:15:30.619 "nvme_admin": false, 00:15:30.619 "nvme_io": false, 00:15:30.619 "nvme_io_md": false, 00:15:30.619 "write_zeroes": true, 00:15:30.619 "zcopy": true, 00:15:30.619 "get_zone_info": false, 00:15:30.619 "zone_management": false, 00:15:30.619 "zone_append": false, 00:15:30.619 "compare": false, 00:15:30.619 "compare_and_write": false, 00:15:30.619 "abort": true, 00:15:30.619 "seek_hole": false, 00:15:30.619 
"seek_data": false, 00:15:30.619 "copy": true, 00:15:30.619 "nvme_iov_md": false 00:15:30.619 }, 00:15:30.619 "memory_domains": [ 00:15:30.619 { 00:15:30.619 "dma_device_id": "system", 00:15:30.619 "dma_device_type": 1 00:15:30.619 }, 00:15:30.619 { 00:15:30.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.619 "dma_device_type": 2 00:15:30.619 } 00:15:30.619 ], 00:15:30.619 "driver_specific": {} 00:15:30.619 } 00:15:30.619 ] 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.619 "name": "Existed_Raid", 00:15:30.619 "uuid": "dfea2663-289f-4284-ae51-a1bb46d77001", 00:15:30.619 "strip_size_kb": 64, 00:15:30.619 "state": "configuring", 00:15:30.619 "raid_level": "raid5f", 00:15:30.619 "superblock": true, 00:15:30.619 "num_base_bdevs": 3, 00:15:30.619 "num_base_bdevs_discovered": 1, 00:15:30.619 "num_base_bdevs_operational": 3, 00:15:30.619 "base_bdevs_list": [ 00:15:30.619 { 00:15:30.619 "name": "BaseBdev1", 00:15:30.619 "uuid": "0e42c809-88e0-4354-a8d4-f270f6090fdb", 00:15:30.619 "is_configured": true, 00:15:30.619 "data_offset": 2048, 00:15:30.619 "data_size": 63488 00:15:30.619 }, 00:15:30.619 { 00:15:30.619 "name": "BaseBdev2", 00:15:30.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.619 "is_configured": false, 00:15:30.619 "data_offset": 0, 00:15:30.619 "data_size": 0 00:15:30.619 }, 00:15:30.619 { 00:15:30.619 "name": "BaseBdev3", 00:15:30.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.619 "is_configured": false, 00:15:30.619 "data_offset": 0, 00:15:30.619 "data_size": 0 00:15:30.619 } 00:15:30.619 ] 00:15:30.619 }' 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.619 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.187 [2024-11-26 18:00:12.785503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.187 [2024-11-26 18:00:12.785615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.187 [2024-11-26 18:00:12.797565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.187 [2024-11-26 18:00:12.799750] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.187 [2024-11-26 18:00:12.799842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.187 [2024-11-26 18:00:12.799878] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:31.187 [2024-11-26 18:00:12.799907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.187 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.187 "name": 
"Existed_Raid", 00:15:31.187 "uuid": "69d31e83-c6e1-4437-ad53-f6312aa6a04a", 00:15:31.187 "strip_size_kb": 64, 00:15:31.187 "state": "configuring", 00:15:31.187 "raid_level": "raid5f", 00:15:31.187 "superblock": true, 00:15:31.187 "num_base_bdevs": 3, 00:15:31.187 "num_base_bdevs_discovered": 1, 00:15:31.187 "num_base_bdevs_operational": 3, 00:15:31.187 "base_bdevs_list": [ 00:15:31.187 { 00:15:31.187 "name": "BaseBdev1", 00:15:31.187 "uuid": "0e42c809-88e0-4354-a8d4-f270f6090fdb", 00:15:31.187 "is_configured": true, 00:15:31.187 "data_offset": 2048, 00:15:31.187 "data_size": 63488 00:15:31.187 }, 00:15:31.188 { 00:15:31.188 "name": "BaseBdev2", 00:15:31.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.188 "is_configured": false, 00:15:31.188 "data_offset": 0, 00:15:31.188 "data_size": 0 00:15:31.188 }, 00:15:31.188 { 00:15:31.188 "name": "BaseBdev3", 00:15:31.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.188 "is_configured": false, 00:15:31.188 "data_offset": 0, 00:15:31.188 "data_size": 0 00:15:31.188 } 00:15:31.188 ] 00:15:31.188 }' 00:15:31.188 18:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.188 18:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.448 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:31.449 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.449 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.449 [2024-11-26 18:00:13.290341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.449 BaseBdev2 00:15:31.449 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.449 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:15:31.449 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:31.449 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:31.449 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:31.449 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:31.449 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:31.449 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:31.449 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.449 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.449 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.449 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:31.449 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.449 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.707 [ 00:15:31.707 { 00:15:31.707 "name": "BaseBdev2", 00:15:31.707 "aliases": [ 00:15:31.707 "30b15390-50a5-4f19-87c2-ea76ceebceca" 00:15:31.707 ], 00:15:31.707 "product_name": "Malloc disk", 00:15:31.707 "block_size": 512, 00:15:31.707 "num_blocks": 65536, 00:15:31.708 "uuid": "30b15390-50a5-4f19-87c2-ea76ceebceca", 00:15:31.708 "assigned_rate_limits": { 00:15:31.708 "rw_ios_per_sec": 0, 00:15:31.708 "rw_mbytes_per_sec": 0, 00:15:31.708 "r_mbytes_per_sec": 0, 00:15:31.708 "w_mbytes_per_sec": 0 00:15:31.708 }, 00:15:31.708 "claimed": true, 
00:15:31.708 "claim_type": "exclusive_write", 00:15:31.708 "zoned": false, 00:15:31.708 "supported_io_types": { 00:15:31.708 "read": true, 00:15:31.708 "write": true, 00:15:31.708 "unmap": true, 00:15:31.708 "flush": true, 00:15:31.708 "reset": true, 00:15:31.708 "nvme_admin": false, 00:15:31.708 "nvme_io": false, 00:15:31.708 "nvme_io_md": false, 00:15:31.708 "write_zeroes": true, 00:15:31.708 "zcopy": true, 00:15:31.708 "get_zone_info": false, 00:15:31.708 "zone_management": false, 00:15:31.708 "zone_append": false, 00:15:31.708 "compare": false, 00:15:31.708 "compare_and_write": false, 00:15:31.708 "abort": true, 00:15:31.708 "seek_hole": false, 00:15:31.708 "seek_data": false, 00:15:31.708 "copy": true, 00:15:31.708 "nvme_iov_md": false 00:15:31.708 }, 00:15:31.708 "memory_domains": [ 00:15:31.708 { 00:15:31.708 "dma_device_id": "system", 00:15:31.708 "dma_device_type": 1 00:15:31.708 }, 00:15:31.708 { 00:15:31.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.708 "dma_device_type": 2 00:15:31.708 } 00:15:31.708 ], 00:15:31.708 "driver_specific": {} 00:15:31.708 } 00:15:31.708 ] 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.708 18:00:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.708 "name": "Existed_Raid", 00:15:31.708 "uuid": "69d31e83-c6e1-4437-ad53-f6312aa6a04a", 00:15:31.708 "strip_size_kb": 64, 00:15:31.708 "state": "configuring", 00:15:31.708 "raid_level": "raid5f", 00:15:31.708 "superblock": true, 00:15:31.708 "num_base_bdevs": 3, 00:15:31.708 "num_base_bdevs_discovered": 2, 00:15:31.708 "num_base_bdevs_operational": 3, 00:15:31.708 "base_bdevs_list": [ 00:15:31.708 { 00:15:31.708 "name": "BaseBdev1", 00:15:31.708 "uuid": "0e42c809-88e0-4354-a8d4-f270f6090fdb", 
00:15:31.708 "is_configured": true, 00:15:31.708 "data_offset": 2048, 00:15:31.708 "data_size": 63488 00:15:31.708 }, 00:15:31.708 { 00:15:31.708 "name": "BaseBdev2", 00:15:31.708 "uuid": "30b15390-50a5-4f19-87c2-ea76ceebceca", 00:15:31.708 "is_configured": true, 00:15:31.708 "data_offset": 2048, 00:15:31.708 "data_size": 63488 00:15:31.708 }, 00:15:31.708 { 00:15:31.708 "name": "BaseBdev3", 00:15:31.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.708 "is_configured": false, 00:15:31.708 "data_offset": 0, 00:15:31.708 "data_size": 0 00:15:31.708 } 00:15:31.708 ] 00:15:31.708 }' 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.708 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.967 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:31.967 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.967 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.226 [2024-11-26 18:00:13.831065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:32.226 [2024-11-26 18:00:13.831511] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:32.226 BaseBdev3 00:15:32.226 [2024-11-26 18:00:13.831575] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:32.226 [2024-11-26 18:00:13.831880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.226 [2024-11-26 18:00:13.838390] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:32.226 [2024-11-26 18:00:13.838454] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:32.226 [2024-11-26 18:00:13.838696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.226 [ 00:15:32.226 { 00:15:32.226 "name": "BaseBdev3", 00:15:32.226 "aliases": [ 00:15:32.226 "a1af4283-976d-4df9-ae08-bfe05f882242" 00:15:32.226 ], 00:15:32.226 "product_name": "Malloc disk", 00:15:32.226 "block_size": 512, 00:15:32.226 
"num_blocks": 65536, 00:15:32.226 "uuid": "a1af4283-976d-4df9-ae08-bfe05f882242", 00:15:32.226 "assigned_rate_limits": { 00:15:32.226 "rw_ios_per_sec": 0, 00:15:32.226 "rw_mbytes_per_sec": 0, 00:15:32.226 "r_mbytes_per_sec": 0, 00:15:32.226 "w_mbytes_per_sec": 0 00:15:32.226 }, 00:15:32.226 "claimed": true, 00:15:32.226 "claim_type": "exclusive_write", 00:15:32.226 "zoned": false, 00:15:32.226 "supported_io_types": { 00:15:32.226 "read": true, 00:15:32.226 "write": true, 00:15:32.226 "unmap": true, 00:15:32.226 "flush": true, 00:15:32.226 "reset": true, 00:15:32.226 "nvme_admin": false, 00:15:32.226 "nvme_io": false, 00:15:32.226 "nvme_io_md": false, 00:15:32.226 "write_zeroes": true, 00:15:32.226 "zcopy": true, 00:15:32.226 "get_zone_info": false, 00:15:32.226 "zone_management": false, 00:15:32.226 "zone_append": false, 00:15:32.226 "compare": false, 00:15:32.226 "compare_and_write": false, 00:15:32.226 "abort": true, 00:15:32.226 "seek_hole": false, 00:15:32.226 "seek_data": false, 00:15:32.226 "copy": true, 00:15:32.226 "nvme_iov_md": false 00:15:32.226 }, 00:15:32.226 "memory_domains": [ 00:15:32.226 { 00:15:32.226 "dma_device_id": "system", 00:15:32.226 "dma_device_type": 1 00:15:32.226 }, 00:15:32.226 { 00:15:32.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.226 "dma_device_type": 2 00:15:32.226 } 00:15:32.226 ], 00:15:32.226 "driver_specific": {} 00:15:32.226 } 00:15:32.226 ] 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.226 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.227 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.227 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.227 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.227 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.227 "name": "Existed_Raid", 00:15:32.227 "uuid": "69d31e83-c6e1-4437-ad53-f6312aa6a04a", 00:15:32.227 "strip_size_kb": 64, 00:15:32.227 "state": "online", 00:15:32.227 "raid_level": "raid5f", 00:15:32.227 "superblock": true, 
00:15:32.227 "num_base_bdevs": 3, 00:15:32.227 "num_base_bdevs_discovered": 3, 00:15:32.227 "num_base_bdevs_operational": 3, 00:15:32.227 "base_bdevs_list": [ 00:15:32.227 { 00:15:32.227 "name": "BaseBdev1", 00:15:32.227 "uuid": "0e42c809-88e0-4354-a8d4-f270f6090fdb", 00:15:32.227 "is_configured": true, 00:15:32.227 "data_offset": 2048, 00:15:32.227 "data_size": 63488 00:15:32.227 }, 00:15:32.227 { 00:15:32.227 "name": "BaseBdev2", 00:15:32.227 "uuid": "30b15390-50a5-4f19-87c2-ea76ceebceca", 00:15:32.227 "is_configured": true, 00:15:32.227 "data_offset": 2048, 00:15:32.227 "data_size": 63488 00:15:32.227 }, 00:15:32.227 { 00:15:32.227 "name": "BaseBdev3", 00:15:32.227 "uuid": "a1af4283-976d-4df9-ae08-bfe05f882242", 00:15:32.227 "is_configured": true, 00:15:32.227 "data_offset": 2048, 00:15:32.227 "data_size": 63488 00:15:32.227 } 00:15:32.227 ] 00:15:32.227 }' 00:15:32.227 18:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.227 18:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.485 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:32.485 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:32.485 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:32.485 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:32.485 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:32.485 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:32.485 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:32.485 18:00:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.485 18:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.485 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:32.485 [2024-11-26 18:00:14.281430] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.485 18:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.485 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:32.485 "name": "Existed_Raid", 00:15:32.485 "aliases": [ 00:15:32.485 "69d31e83-c6e1-4437-ad53-f6312aa6a04a" 00:15:32.485 ], 00:15:32.485 "product_name": "Raid Volume", 00:15:32.485 "block_size": 512, 00:15:32.485 "num_blocks": 126976, 00:15:32.485 "uuid": "69d31e83-c6e1-4437-ad53-f6312aa6a04a", 00:15:32.485 "assigned_rate_limits": { 00:15:32.485 "rw_ios_per_sec": 0, 00:15:32.485 "rw_mbytes_per_sec": 0, 00:15:32.485 "r_mbytes_per_sec": 0, 00:15:32.485 "w_mbytes_per_sec": 0 00:15:32.485 }, 00:15:32.485 "claimed": false, 00:15:32.485 "zoned": false, 00:15:32.485 "supported_io_types": { 00:15:32.485 "read": true, 00:15:32.485 "write": true, 00:15:32.485 "unmap": false, 00:15:32.485 "flush": false, 00:15:32.485 "reset": true, 00:15:32.485 "nvme_admin": false, 00:15:32.485 "nvme_io": false, 00:15:32.485 "nvme_io_md": false, 00:15:32.485 "write_zeroes": true, 00:15:32.485 "zcopy": false, 00:15:32.485 "get_zone_info": false, 00:15:32.485 "zone_management": false, 00:15:32.485 "zone_append": false, 00:15:32.485 "compare": false, 00:15:32.485 "compare_and_write": false, 00:15:32.485 "abort": false, 00:15:32.485 "seek_hole": false, 00:15:32.485 "seek_data": false, 00:15:32.485 "copy": false, 00:15:32.485 "nvme_iov_md": false 00:15:32.485 }, 00:15:32.485 "driver_specific": { 00:15:32.485 "raid": { 00:15:32.485 "uuid": "69d31e83-c6e1-4437-ad53-f6312aa6a04a", 00:15:32.485 
"strip_size_kb": 64, 00:15:32.485 "state": "online", 00:15:32.485 "raid_level": "raid5f", 00:15:32.485 "superblock": true, 00:15:32.485 "num_base_bdevs": 3, 00:15:32.485 "num_base_bdevs_discovered": 3, 00:15:32.485 "num_base_bdevs_operational": 3, 00:15:32.485 "base_bdevs_list": [ 00:15:32.485 { 00:15:32.485 "name": "BaseBdev1", 00:15:32.485 "uuid": "0e42c809-88e0-4354-a8d4-f270f6090fdb", 00:15:32.485 "is_configured": true, 00:15:32.485 "data_offset": 2048, 00:15:32.485 "data_size": 63488 00:15:32.485 }, 00:15:32.485 { 00:15:32.485 "name": "BaseBdev2", 00:15:32.485 "uuid": "30b15390-50a5-4f19-87c2-ea76ceebceca", 00:15:32.485 "is_configured": true, 00:15:32.485 "data_offset": 2048, 00:15:32.485 "data_size": 63488 00:15:32.485 }, 00:15:32.485 { 00:15:32.485 "name": "BaseBdev3", 00:15:32.485 "uuid": "a1af4283-976d-4df9-ae08-bfe05f882242", 00:15:32.485 "is_configured": true, 00:15:32.485 "data_offset": 2048, 00:15:32.485 "data_size": 63488 00:15:32.485 } 00:15:32.485 ] 00:15:32.485 } 00:15:32.485 } 00:15:32.485 }' 00:15:32.485 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:32.744 BaseBdev2 00:15:32.744 BaseBdev3' 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.744 18:00:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.744 18:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.744 [2024-11-26 18:00:14.584711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.005 
18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.005 "name": "Existed_Raid", 00:15:33.005 "uuid": "69d31e83-c6e1-4437-ad53-f6312aa6a04a", 00:15:33.005 "strip_size_kb": 64, 00:15:33.005 "state": "online", 00:15:33.005 "raid_level": "raid5f", 00:15:33.005 "superblock": true, 00:15:33.005 "num_base_bdevs": 3, 00:15:33.005 "num_base_bdevs_discovered": 2, 00:15:33.005 "num_base_bdevs_operational": 2, 00:15:33.005 
"base_bdevs_list": [ 00:15:33.005 { 00:15:33.005 "name": null, 00:15:33.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.005 "is_configured": false, 00:15:33.005 "data_offset": 0, 00:15:33.005 "data_size": 63488 00:15:33.005 }, 00:15:33.005 { 00:15:33.005 "name": "BaseBdev2", 00:15:33.005 "uuid": "30b15390-50a5-4f19-87c2-ea76ceebceca", 00:15:33.005 "is_configured": true, 00:15:33.005 "data_offset": 2048, 00:15:33.005 "data_size": 63488 00:15:33.005 }, 00:15:33.005 { 00:15:33.005 "name": "BaseBdev3", 00:15:33.005 "uuid": "a1af4283-976d-4df9-ae08-bfe05f882242", 00:15:33.005 "is_configured": true, 00:15:33.005 "data_offset": 2048, 00:15:33.005 "data_size": 63488 00:15:33.005 } 00:15:33.005 ] 00:15:33.005 }' 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.005 18:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.264 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:33.264 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:33.264 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:33.264 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.264 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.264 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.523 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.523 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:33.523 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:33.523 18:00:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:33.523 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.523 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.523 [2024-11-26 18:00:15.168944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:33.523 [2024-11-26 18:00:15.169234] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.523 [2024-11-26 18:00:15.284190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.523 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.523 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:33.523 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:33.523 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.523 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.523 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.523 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:33.523 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.523 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:33.523 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:33.523 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:33.523 18:00:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.523 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.523 [2024-11-26 18:00:15.340150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:33.523 [2024-11-26 18:00:15.340269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.784 BaseBdev2 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.784 [ 00:15:33.784 { 00:15:33.784 "name": "BaseBdev2", 
00:15:33.784 "aliases": [ 00:15:33.784 "5da29837-36cc-4dab-bd01-18049ccea363" 00:15:33.784 ], 00:15:33.784 "product_name": "Malloc disk", 00:15:33.784 "block_size": 512, 00:15:33.784 "num_blocks": 65536, 00:15:33.784 "uuid": "5da29837-36cc-4dab-bd01-18049ccea363", 00:15:33.784 "assigned_rate_limits": { 00:15:33.784 "rw_ios_per_sec": 0, 00:15:33.784 "rw_mbytes_per_sec": 0, 00:15:33.784 "r_mbytes_per_sec": 0, 00:15:33.784 "w_mbytes_per_sec": 0 00:15:33.784 }, 00:15:33.784 "claimed": false, 00:15:33.784 "zoned": false, 00:15:33.784 "supported_io_types": { 00:15:33.784 "read": true, 00:15:33.784 "write": true, 00:15:33.784 "unmap": true, 00:15:33.784 "flush": true, 00:15:33.784 "reset": true, 00:15:33.784 "nvme_admin": false, 00:15:33.784 "nvme_io": false, 00:15:33.784 "nvme_io_md": false, 00:15:33.784 "write_zeroes": true, 00:15:33.784 "zcopy": true, 00:15:33.784 "get_zone_info": false, 00:15:33.784 "zone_management": false, 00:15:33.784 "zone_append": false, 00:15:33.784 "compare": false, 00:15:33.784 "compare_and_write": false, 00:15:33.784 "abort": true, 00:15:33.784 "seek_hole": false, 00:15:33.784 "seek_data": false, 00:15:33.784 "copy": true, 00:15:33.784 "nvme_iov_md": false 00:15:33.784 }, 00:15:33.784 "memory_domains": [ 00:15:33.784 { 00:15:33.784 "dma_device_id": "system", 00:15:33.784 "dma_device_type": 1 00:15:33.784 }, 00:15:33.784 { 00:15:33.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.784 "dma_device_type": 2 00:15:33.784 } 00:15:33.784 ], 00:15:33.784 "driver_specific": {} 00:15:33.784 } 00:15:33.784 ] 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 
00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:33.784 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.785 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.785 BaseBdev3 00:15:33.785 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.785 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:33.785 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:33.785 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:33.785 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:33.785 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:33.785 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:33.785 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:34.045 [ 00:15:34.045 { 00:15:34.045 "name": "BaseBdev3", 00:15:34.045 "aliases": [ 00:15:34.045 "ab051318-c851-43eb-aa32-2040e119e42d" 00:15:34.045 ], 00:15:34.045 "product_name": "Malloc disk", 00:15:34.045 "block_size": 512, 00:15:34.045 "num_blocks": 65536, 00:15:34.045 "uuid": "ab051318-c851-43eb-aa32-2040e119e42d", 00:15:34.045 "assigned_rate_limits": { 00:15:34.045 "rw_ios_per_sec": 0, 00:15:34.045 "rw_mbytes_per_sec": 0, 00:15:34.045 "r_mbytes_per_sec": 0, 00:15:34.045 "w_mbytes_per_sec": 0 00:15:34.045 }, 00:15:34.045 "claimed": false, 00:15:34.045 "zoned": false, 00:15:34.045 "supported_io_types": { 00:15:34.045 "read": true, 00:15:34.045 "write": true, 00:15:34.045 "unmap": true, 00:15:34.045 "flush": true, 00:15:34.045 "reset": true, 00:15:34.045 "nvme_admin": false, 00:15:34.045 "nvme_io": false, 00:15:34.045 "nvme_io_md": false, 00:15:34.045 "write_zeroes": true, 00:15:34.045 "zcopy": true, 00:15:34.045 "get_zone_info": false, 00:15:34.045 "zone_management": false, 00:15:34.045 "zone_append": false, 00:15:34.045 "compare": false, 00:15:34.045 "compare_and_write": false, 00:15:34.045 "abort": true, 00:15:34.045 "seek_hole": false, 00:15:34.045 "seek_data": false, 00:15:34.045 "copy": true, 00:15:34.045 "nvme_iov_md": false 00:15:34.045 }, 00:15:34.045 "memory_domains": [ 00:15:34.045 { 00:15:34.045 "dma_device_id": "system", 00:15:34.045 "dma_device_type": 1 00:15:34.045 }, 00:15:34.045 { 00:15:34.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.045 "dma_device_type": 2 00:15:34.045 } 00:15:34.045 ], 00:15:34.045 "driver_specific": {} 00:15:34.045 } 00:15:34.045 ] 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:34.045 
18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.045 [2024-11-26 18:00:15.688232] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.045 [2024-11-26 18:00:15.688363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.045 [2024-11-26 18:00:15.688440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:34.045 [2024-11-26 18:00:15.690668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.045 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.045 "name": "Existed_Raid", 00:15:34.045 "uuid": "3966a262-e3fd-435e-917b-ebbf2cff4f8c", 00:15:34.045 "strip_size_kb": 64, 00:15:34.045 "state": "configuring", 00:15:34.045 "raid_level": "raid5f", 00:15:34.045 "superblock": true, 00:15:34.045 "num_base_bdevs": 3, 00:15:34.045 "num_base_bdevs_discovered": 2, 00:15:34.045 "num_base_bdevs_operational": 3, 00:15:34.045 "base_bdevs_list": [ 00:15:34.045 { 00:15:34.045 "name": "BaseBdev1", 00:15:34.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.046 "is_configured": false, 00:15:34.046 "data_offset": 0, 00:15:34.046 "data_size": 0 00:15:34.046 }, 00:15:34.046 { 00:15:34.046 "name": "BaseBdev2", 00:15:34.046 "uuid": "5da29837-36cc-4dab-bd01-18049ccea363", 00:15:34.046 "is_configured": true, 00:15:34.046 "data_offset": 2048, 00:15:34.046 "data_size": 63488 00:15:34.046 }, 00:15:34.046 { 00:15:34.046 "name": "BaseBdev3", 00:15:34.046 "uuid": 
"ab051318-c851-43eb-aa32-2040e119e42d", 00:15:34.046 "is_configured": true, 00:15:34.046 "data_offset": 2048, 00:15:34.046 "data_size": 63488 00:15:34.046 } 00:15:34.046 ] 00:15:34.046 }' 00:15:34.046 18:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.046 18:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.306 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:34.306 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.306 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.306 [2024-11-26 18:00:16.143509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:34.306 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.306 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:34.306 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.306 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.306 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.306 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.306 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.306 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.306 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.306 18:00:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.306 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.306 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.306 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.306 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.306 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.566 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.566 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.566 "name": "Existed_Raid", 00:15:34.566 "uuid": "3966a262-e3fd-435e-917b-ebbf2cff4f8c", 00:15:34.566 "strip_size_kb": 64, 00:15:34.566 "state": "configuring", 00:15:34.566 "raid_level": "raid5f", 00:15:34.566 "superblock": true, 00:15:34.566 "num_base_bdevs": 3, 00:15:34.566 "num_base_bdevs_discovered": 1, 00:15:34.566 "num_base_bdevs_operational": 3, 00:15:34.566 "base_bdevs_list": [ 00:15:34.566 { 00:15:34.566 "name": "BaseBdev1", 00:15:34.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.566 "is_configured": false, 00:15:34.566 "data_offset": 0, 00:15:34.566 "data_size": 0 00:15:34.566 }, 00:15:34.566 { 00:15:34.566 "name": null, 00:15:34.566 "uuid": "5da29837-36cc-4dab-bd01-18049ccea363", 00:15:34.566 "is_configured": false, 00:15:34.566 "data_offset": 0, 00:15:34.566 "data_size": 63488 00:15:34.566 }, 00:15:34.566 { 00:15:34.566 "name": "BaseBdev3", 00:15:34.566 "uuid": "ab051318-c851-43eb-aa32-2040e119e42d", 00:15:34.566 "is_configured": true, 00:15:34.566 "data_offset": 2048, 00:15:34.566 "data_size": 63488 00:15:34.566 } 00:15:34.566 ] 
00:15:34.566 }' 00:15:34.566 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.566 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.826 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.826 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.826 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.826 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:34.826 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.826 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:34.826 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:34.826 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.826 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.086 [2024-11-26 18:00:16.729562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.086 BaseBdev1 00:15:35.086 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.086 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:35.086 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:35.086 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:35.086 18:00:16 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:15:35.086 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.086 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.086 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.086 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.086 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.086 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.086 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:35.086 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.086 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.086 [ 00:15:35.086 { 00:15:35.086 "name": "BaseBdev1", 00:15:35.086 "aliases": [ 00:15:35.086 "ed8410e4-8122-4fc4-be05-992c086a3819" 00:15:35.086 ], 00:15:35.086 "product_name": "Malloc disk", 00:15:35.086 "block_size": 512, 00:15:35.086 "num_blocks": 65536, 00:15:35.086 "uuid": "ed8410e4-8122-4fc4-be05-992c086a3819", 00:15:35.086 "assigned_rate_limits": { 00:15:35.086 "rw_ios_per_sec": 0, 00:15:35.086 "rw_mbytes_per_sec": 0, 00:15:35.086 "r_mbytes_per_sec": 0, 00:15:35.086 "w_mbytes_per_sec": 0 00:15:35.086 }, 00:15:35.086 "claimed": true, 00:15:35.086 "claim_type": "exclusive_write", 00:15:35.086 "zoned": false, 00:15:35.086 "supported_io_types": { 00:15:35.086 "read": true, 00:15:35.086 "write": true, 00:15:35.086 "unmap": true, 00:15:35.086 "flush": true, 00:15:35.087 "reset": true, 00:15:35.087 "nvme_admin": false, 00:15:35.087 "nvme_io": false, 00:15:35.087 
"nvme_io_md": false, 00:15:35.087 "write_zeroes": true, 00:15:35.087 "zcopy": true, 00:15:35.087 "get_zone_info": false, 00:15:35.087 "zone_management": false, 00:15:35.087 "zone_append": false, 00:15:35.087 "compare": false, 00:15:35.087 "compare_and_write": false, 00:15:35.087 "abort": true, 00:15:35.087 "seek_hole": false, 00:15:35.087 "seek_data": false, 00:15:35.087 "copy": true, 00:15:35.087 "nvme_iov_md": false 00:15:35.087 }, 00:15:35.087 "memory_domains": [ 00:15:35.087 { 00:15:35.087 "dma_device_id": "system", 00:15:35.087 "dma_device_type": 1 00:15:35.087 }, 00:15:35.087 { 00:15:35.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.087 "dma_device_type": 2 00:15:35.087 } 00:15:35.087 ], 00:15:35.087 "driver_specific": {} 00:15:35.087 } 00:15:35.087 ] 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.087 
18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.087 "name": "Existed_Raid", 00:15:35.087 "uuid": "3966a262-e3fd-435e-917b-ebbf2cff4f8c", 00:15:35.087 "strip_size_kb": 64, 00:15:35.087 "state": "configuring", 00:15:35.087 "raid_level": "raid5f", 00:15:35.087 "superblock": true, 00:15:35.087 "num_base_bdevs": 3, 00:15:35.087 "num_base_bdevs_discovered": 2, 00:15:35.087 "num_base_bdevs_operational": 3, 00:15:35.087 "base_bdevs_list": [ 00:15:35.087 { 00:15:35.087 "name": "BaseBdev1", 00:15:35.087 "uuid": "ed8410e4-8122-4fc4-be05-992c086a3819", 00:15:35.087 "is_configured": true, 00:15:35.087 "data_offset": 2048, 00:15:35.087 "data_size": 63488 00:15:35.087 }, 00:15:35.087 { 00:15:35.087 "name": null, 00:15:35.087 "uuid": "5da29837-36cc-4dab-bd01-18049ccea363", 00:15:35.087 "is_configured": false, 00:15:35.087 "data_offset": 0, 00:15:35.087 "data_size": 63488 00:15:35.087 }, 00:15:35.087 { 00:15:35.087 "name": "BaseBdev3", 00:15:35.087 "uuid": "ab051318-c851-43eb-aa32-2040e119e42d", 00:15:35.087 "is_configured": true, 00:15:35.087 "data_offset": 2048, 00:15:35.087 "data_size": 63488 00:15:35.087 } 
00:15:35.087 ] 00:15:35.087 }' 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.087 18:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.654 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.654 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.654 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.654 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:35.654 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.654 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:35.654 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.655 [2024-11-26 18:00:17.277056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.655 "name": "Existed_Raid", 00:15:35.655 "uuid": "3966a262-e3fd-435e-917b-ebbf2cff4f8c", 00:15:35.655 "strip_size_kb": 64, 00:15:35.655 "state": "configuring", 00:15:35.655 "raid_level": "raid5f", 00:15:35.655 "superblock": true, 00:15:35.655 "num_base_bdevs": 3, 00:15:35.655 "num_base_bdevs_discovered": 1, 00:15:35.655 "num_base_bdevs_operational": 3, 00:15:35.655 "base_bdevs_list": [ 00:15:35.655 { 00:15:35.655 "name": "BaseBdev1", 00:15:35.655 "uuid": "ed8410e4-8122-4fc4-be05-992c086a3819", 00:15:35.655 "is_configured": true, 
00:15:35.655 "data_offset": 2048, 00:15:35.655 "data_size": 63488 00:15:35.655 }, 00:15:35.655 { 00:15:35.655 "name": null, 00:15:35.655 "uuid": "5da29837-36cc-4dab-bd01-18049ccea363", 00:15:35.655 "is_configured": false, 00:15:35.655 "data_offset": 0, 00:15:35.655 "data_size": 63488 00:15:35.655 }, 00:15:35.655 { 00:15:35.655 "name": null, 00:15:35.655 "uuid": "ab051318-c851-43eb-aa32-2040e119e42d", 00:15:35.655 "is_configured": false, 00:15:35.655 "data_offset": 0, 00:15:35.655 "data_size": 63488 00:15:35.655 } 00:15:35.655 ] 00:15:35.655 }' 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.655 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.222 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.222 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:36.222 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.222 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.222 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.222 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:36.222 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:36.222 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.222 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.222 [2024-11-26 18:00:17.832158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:36.222 18:00:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.222 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:36.222 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.222 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.222 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.222 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.222 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.222 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.223 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.223 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.223 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.223 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.223 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.223 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.223 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.223 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.223 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:36.223 "name": "Existed_Raid", 00:15:36.223 "uuid": "3966a262-e3fd-435e-917b-ebbf2cff4f8c", 00:15:36.223 "strip_size_kb": 64, 00:15:36.223 "state": "configuring", 00:15:36.223 "raid_level": "raid5f", 00:15:36.223 "superblock": true, 00:15:36.223 "num_base_bdevs": 3, 00:15:36.223 "num_base_bdevs_discovered": 2, 00:15:36.223 "num_base_bdevs_operational": 3, 00:15:36.223 "base_bdevs_list": [ 00:15:36.223 { 00:15:36.223 "name": "BaseBdev1", 00:15:36.223 "uuid": "ed8410e4-8122-4fc4-be05-992c086a3819", 00:15:36.223 "is_configured": true, 00:15:36.223 "data_offset": 2048, 00:15:36.223 "data_size": 63488 00:15:36.223 }, 00:15:36.223 { 00:15:36.223 "name": null, 00:15:36.223 "uuid": "5da29837-36cc-4dab-bd01-18049ccea363", 00:15:36.223 "is_configured": false, 00:15:36.223 "data_offset": 0, 00:15:36.223 "data_size": 63488 00:15:36.223 }, 00:15:36.223 { 00:15:36.223 "name": "BaseBdev3", 00:15:36.223 "uuid": "ab051318-c851-43eb-aa32-2040e119e42d", 00:15:36.223 "is_configured": true, 00:15:36.223 "data_offset": 2048, 00:15:36.223 "data_size": 63488 00:15:36.223 } 00:15:36.223 ] 00:15:36.223 }' 00:15:36.223 18:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.223 18:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.482 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.482 18:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.482 18:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.482 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:36.482 18:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.741 18:00:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:36.741 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:36.741 18:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.742 [2024-11-26 18:00:18.371299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.742 18:00:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.742 "name": "Existed_Raid", 00:15:36.742 "uuid": "3966a262-e3fd-435e-917b-ebbf2cff4f8c", 00:15:36.742 "strip_size_kb": 64, 00:15:36.742 "state": "configuring", 00:15:36.742 "raid_level": "raid5f", 00:15:36.742 "superblock": true, 00:15:36.742 "num_base_bdevs": 3, 00:15:36.742 "num_base_bdevs_discovered": 1, 00:15:36.742 "num_base_bdevs_operational": 3, 00:15:36.742 "base_bdevs_list": [ 00:15:36.742 { 00:15:36.742 "name": null, 00:15:36.742 "uuid": "ed8410e4-8122-4fc4-be05-992c086a3819", 00:15:36.742 "is_configured": false, 00:15:36.742 "data_offset": 0, 00:15:36.742 "data_size": 63488 00:15:36.742 }, 00:15:36.742 { 00:15:36.742 "name": null, 00:15:36.742 "uuid": "5da29837-36cc-4dab-bd01-18049ccea363", 00:15:36.742 "is_configured": false, 00:15:36.742 "data_offset": 0, 00:15:36.742 "data_size": 63488 00:15:36.742 }, 00:15:36.742 { 00:15:36.742 "name": "BaseBdev3", 00:15:36.742 "uuid": "ab051318-c851-43eb-aa32-2040e119e42d", 00:15:36.742 "is_configured": true, 00:15:36.742 "data_offset": 2048, 00:15:36.742 "data_size": 63488 00:15:36.742 } 00:15:36.742 ] 00:15:36.742 }' 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.742 18:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.311 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:15:37.311 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.311 18:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.311 18:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.311 18:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.311 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:37.311 18:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.311 [2024-11-26 18:00:19.005815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.311 
18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.311 "name": "Existed_Raid", 00:15:37.311 "uuid": "3966a262-e3fd-435e-917b-ebbf2cff4f8c", 00:15:37.311 "strip_size_kb": 64, 00:15:37.311 "state": "configuring", 00:15:37.311 "raid_level": "raid5f", 00:15:37.311 "superblock": true, 00:15:37.311 "num_base_bdevs": 3, 00:15:37.311 "num_base_bdevs_discovered": 2, 00:15:37.311 "num_base_bdevs_operational": 3, 00:15:37.311 "base_bdevs_list": [ 00:15:37.311 { 00:15:37.311 "name": null, 00:15:37.311 "uuid": "ed8410e4-8122-4fc4-be05-992c086a3819", 00:15:37.311 "is_configured": false, 00:15:37.311 "data_offset": 0, 00:15:37.311 "data_size": 63488 00:15:37.311 }, 00:15:37.311 { 00:15:37.311 "name": "BaseBdev2", 00:15:37.311 "uuid": "5da29837-36cc-4dab-bd01-18049ccea363", 00:15:37.311 "is_configured": true, 00:15:37.311 "data_offset": 2048, 00:15:37.311 "data_size": 63488 00:15:37.311 }, 
00:15:37.311 { 00:15:37.311 "name": "BaseBdev3", 00:15:37.311 "uuid": "ab051318-c851-43eb-aa32-2040e119e42d", 00:15:37.311 "is_configured": true, 00:15:37.311 "data_offset": 2048, 00:15:37.311 "data_size": 63488 00:15:37.311 } 00:15:37.311 ] 00:15:37.311 }' 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.311 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ed8410e4-8122-4fc4-be05-992c086a3819 00:15:37.878 18:00:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.878 [2024-11-26 18:00:19.618258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:37.878 [2024-11-26 18:00:19.618636] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:37.878 [2024-11-26 18:00:19.618697] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:37.878 [2024-11-26 18:00:19.619030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:37.878 NewBaseBdev 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.878 [2024-11-26 18:00:19.625489] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x617000008200 00:15:37.878 [2024-11-26 18:00:19.625568] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:37.878 [2024-11-26 18:00:19.625849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.878 [ 00:15:37.878 { 00:15:37.878 "name": "NewBaseBdev", 00:15:37.878 "aliases": [ 00:15:37.878 "ed8410e4-8122-4fc4-be05-992c086a3819" 00:15:37.878 ], 00:15:37.878 "product_name": "Malloc disk", 00:15:37.878 "block_size": 512, 00:15:37.878 "num_blocks": 65536, 00:15:37.878 "uuid": "ed8410e4-8122-4fc4-be05-992c086a3819", 00:15:37.878 "assigned_rate_limits": { 00:15:37.878 "rw_ios_per_sec": 0, 00:15:37.878 "rw_mbytes_per_sec": 0, 00:15:37.878 "r_mbytes_per_sec": 0, 00:15:37.878 "w_mbytes_per_sec": 0 00:15:37.878 }, 00:15:37.878 "claimed": true, 00:15:37.878 "claim_type": "exclusive_write", 00:15:37.878 "zoned": false, 00:15:37.878 "supported_io_types": { 00:15:37.878 "read": true, 00:15:37.878 "write": true, 00:15:37.878 "unmap": true, 00:15:37.878 "flush": true, 00:15:37.878 "reset": true, 00:15:37.878 "nvme_admin": false, 00:15:37.878 "nvme_io": false, 00:15:37.878 "nvme_io_md": false, 00:15:37.878 "write_zeroes": true, 00:15:37.878 "zcopy": true, 00:15:37.878 "get_zone_info": false, 00:15:37.878 "zone_management": false, 00:15:37.878 "zone_append": false, 00:15:37.878 "compare": false, 00:15:37.878 "compare_and_write": false, 00:15:37.878 "abort": true, 00:15:37.878 "seek_hole": false, 
00:15:37.878 "seek_data": false, 00:15:37.878 "copy": true, 00:15:37.878 "nvme_iov_md": false 00:15:37.878 }, 00:15:37.878 "memory_domains": [ 00:15:37.878 { 00:15:37.878 "dma_device_id": "system", 00:15:37.878 "dma_device_type": 1 00:15:37.878 }, 00:15:37.878 { 00:15:37.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.878 "dma_device_type": 2 00:15:37.878 } 00:15:37.878 ], 00:15:37.878 "driver_specific": {} 00:15:37.878 } 00:15:37.878 ] 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.878 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.878 "name": "Existed_Raid", 00:15:37.879 "uuid": "3966a262-e3fd-435e-917b-ebbf2cff4f8c", 00:15:37.879 "strip_size_kb": 64, 00:15:37.879 "state": "online", 00:15:37.879 "raid_level": "raid5f", 00:15:37.879 "superblock": true, 00:15:37.879 "num_base_bdevs": 3, 00:15:37.879 "num_base_bdevs_discovered": 3, 00:15:37.879 "num_base_bdevs_operational": 3, 00:15:37.879 "base_bdevs_list": [ 00:15:37.879 { 00:15:37.879 "name": "NewBaseBdev", 00:15:37.879 "uuid": "ed8410e4-8122-4fc4-be05-992c086a3819", 00:15:37.879 "is_configured": true, 00:15:37.879 "data_offset": 2048, 00:15:37.879 "data_size": 63488 00:15:37.879 }, 00:15:37.879 { 00:15:37.879 "name": "BaseBdev2", 00:15:37.879 "uuid": "5da29837-36cc-4dab-bd01-18049ccea363", 00:15:37.879 "is_configured": true, 00:15:37.879 "data_offset": 2048, 00:15:37.879 "data_size": 63488 00:15:37.879 }, 00:15:37.879 { 00:15:37.879 "name": "BaseBdev3", 00:15:37.879 "uuid": "ab051318-c851-43eb-aa32-2040e119e42d", 00:15:37.879 "is_configured": true, 00:15:37.879 "data_offset": 2048, 00:15:37.879 "data_size": 63488 00:15:37.879 } 00:15:37.879 ] 00:15:37.879 }' 00:15:37.879 18:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.879 18:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.475 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:15:38.475 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:38.475 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:38.475 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:38.475 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:38.475 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:38.475 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:38.475 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:38.475 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.475 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.475 [2024-11-26 18:00:20.136560] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:38.475 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.475 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:38.475 "name": "Existed_Raid", 00:15:38.475 "aliases": [ 00:15:38.475 "3966a262-e3fd-435e-917b-ebbf2cff4f8c" 00:15:38.475 ], 00:15:38.475 "product_name": "Raid Volume", 00:15:38.475 "block_size": 512, 00:15:38.475 "num_blocks": 126976, 00:15:38.475 "uuid": "3966a262-e3fd-435e-917b-ebbf2cff4f8c", 00:15:38.475 "assigned_rate_limits": { 00:15:38.475 "rw_ios_per_sec": 0, 00:15:38.475 "rw_mbytes_per_sec": 0, 00:15:38.475 "r_mbytes_per_sec": 0, 00:15:38.475 "w_mbytes_per_sec": 0 00:15:38.475 }, 00:15:38.475 "claimed": false, 00:15:38.475 "zoned": false, 00:15:38.475 
"supported_io_types": { 00:15:38.475 "read": true, 00:15:38.475 "write": true, 00:15:38.476 "unmap": false, 00:15:38.476 "flush": false, 00:15:38.476 "reset": true, 00:15:38.476 "nvme_admin": false, 00:15:38.476 "nvme_io": false, 00:15:38.476 "nvme_io_md": false, 00:15:38.476 "write_zeroes": true, 00:15:38.476 "zcopy": false, 00:15:38.476 "get_zone_info": false, 00:15:38.476 "zone_management": false, 00:15:38.476 "zone_append": false, 00:15:38.476 "compare": false, 00:15:38.476 "compare_and_write": false, 00:15:38.476 "abort": false, 00:15:38.476 "seek_hole": false, 00:15:38.476 "seek_data": false, 00:15:38.476 "copy": false, 00:15:38.476 "nvme_iov_md": false 00:15:38.476 }, 00:15:38.476 "driver_specific": { 00:15:38.476 "raid": { 00:15:38.476 "uuid": "3966a262-e3fd-435e-917b-ebbf2cff4f8c", 00:15:38.476 "strip_size_kb": 64, 00:15:38.476 "state": "online", 00:15:38.476 "raid_level": "raid5f", 00:15:38.476 "superblock": true, 00:15:38.476 "num_base_bdevs": 3, 00:15:38.476 "num_base_bdevs_discovered": 3, 00:15:38.476 "num_base_bdevs_operational": 3, 00:15:38.476 "base_bdevs_list": [ 00:15:38.476 { 00:15:38.476 "name": "NewBaseBdev", 00:15:38.476 "uuid": "ed8410e4-8122-4fc4-be05-992c086a3819", 00:15:38.476 "is_configured": true, 00:15:38.476 "data_offset": 2048, 00:15:38.476 "data_size": 63488 00:15:38.476 }, 00:15:38.476 { 00:15:38.476 "name": "BaseBdev2", 00:15:38.476 "uuid": "5da29837-36cc-4dab-bd01-18049ccea363", 00:15:38.476 "is_configured": true, 00:15:38.476 "data_offset": 2048, 00:15:38.476 "data_size": 63488 00:15:38.476 }, 00:15:38.476 { 00:15:38.476 "name": "BaseBdev3", 00:15:38.476 "uuid": "ab051318-c851-43eb-aa32-2040e119e42d", 00:15:38.476 "is_configured": true, 00:15:38.476 "data_offset": 2048, 00:15:38.476 "data_size": 63488 00:15:38.476 } 00:15:38.476 ] 00:15:38.476 } 00:15:38.476 } 00:15:38.476 }' 00:15:38.476 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:15:38.476 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:38.476 BaseBdev2 00:15:38.476 BaseBdev3' 00:15:38.476 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.476 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:38.476 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.476 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:38.476 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.476 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.476 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.476 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.476 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.476 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.476 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.476 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:38.476 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.476 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.734 [2024-11-26 18:00:20.439857] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:38.734 [2024-11-26 18:00:20.439989] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:15:38.734 [2024-11-26 18:00:20.440146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.734 [2024-11-26 18:00:20.440523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:38.734 [2024-11-26 18:00:20.440593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80897 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80897 ']' 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80897 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80897 00:15:38.734 killing process with pid 80897 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80897' 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80897 00:15:38.734 [2024-11-26 18:00:20.484261] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:38.734 18:00:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 80897 00:15:38.992 [2024-11-26 18:00:20.841199] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:40.365 18:00:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:40.365 00:15:40.365 real 0m11.348s 00:15:40.365 user 0m17.957s 00:15:40.365 sys 0m1.844s 00:15:40.365 18:00:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.365 ************************************ 00:15:40.365 END TEST raid5f_state_function_test_sb 00:15:40.365 ************************************ 00:15:40.365 18:00:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.365 18:00:22 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:40.365 18:00:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:40.365 18:00:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.365 18:00:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:40.623 ************************************ 00:15:40.623 START TEST raid5f_superblock_test 00:15:40.623 ************************************ 00:15:40.623 18:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:40.623 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:40.623 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:40.623 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:40.623 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:40.623 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:40.623 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81523 00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81523 00:15:40.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81523 ']' 00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:40.624 18:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.624 [2024-11-26 18:00:22.333869] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:15:40.624 [2024-11-26 18:00:22.334118] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81523 ] 00:15:40.882 [2024-11-26 18:00:22.515191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.882 [2024-11-26 18:00:22.652000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.140 [2024-11-26 18:00:22.891164] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.140 [2024-11-26 18:00:22.891308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.704 malloc1 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.704 [2024-11-26 18:00:23.320681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:41.704 [2024-11-26 18:00:23.320879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.704 [2024-11-26 18:00:23.320956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:41.704 [2024-11-26 
18:00:23.321010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.704 [2024-11-26 18:00:23.323737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.704 [2024-11-26 18:00:23.323795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:41.704 pt1 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.704 malloc2 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.704 [2024-11-26 18:00:23.383446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:41.704 [2024-11-26 18:00:23.383606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.704 [2024-11-26 18:00:23.383672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:41.704 [2024-11-26 18:00:23.383711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.704 [2024-11-26 18:00:23.386327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.704 [2024-11-26 18:00:23.386438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:41.704 pt2 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:41.704 18:00:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.704 malloc3 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.704 [2024-11-26 18:00:23.461893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:41.704 [2024-11-26 18:00:23.462076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.704 [2024-11-26 18:00:23.462130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:41.704 [2024-11-26 18:00:23.462167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.704 [2024-11-26 18:00:23.464748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.704 [2024-11-26 18:00:23.464858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:41.704 pt3 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:41.704 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f 
-b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.705 [2024-11-26 18:00:23.473967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:41.705 [2024-11-26 18:00:23.476230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:41.705 [2024-11-26 18:00:23.476382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:41.705 [2024-11-26 18:00:23.476640] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:41.705 [2024-11-26 18:00:23.476708] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:41.705 [2024-11-26 18:00:23.477082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:41.705 [2024-11-26 18:00:23.484353] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:41.705 [2024-11-26 18:00:23.484448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:41.705 [2024-11-26 18:00:23.484811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.705 18:00:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.705 "name": "raid_bdev1", 00:15:41.705 "uuid": "1dcac61f-2b3c-4de8-87aa-d0fe4a100d30", 00:15:41.705 "strip_size_kb": 64, 00:15:41.705 "state": "online", 00:15:41.705 "raid_level": "raid5f", 00:15:41.705 "superblock": true, 00:15:41.705 "num_base_bdevs": 3, 00:15:41.705 "num_base_bdevs_discovered": 3, 00:15:41.705 "num_base_bdevs_operational": 3, 00:15:41.705 "base_bdevs_list": [ 00:15:41.705 { 00:15:41.705 "name": "pt1", 00:15:41.705 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:41.705 "is_configured": true, 00:15:41.705 "data_offset": 2048, 00:15:41.705 "data_size": 63488 00:15:41.705 }, 00:15:41.705 { 00:15:41.705 "name": "pt2", 00:15:41.705 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:15:41.705 "is_configured": true, 00:15:41.705 "data_offset": 2048, 00:15:41.705 "data_size": 63488 00:15:41.705 }, 00:15:41.705 { 00:15:41.705 "name": "pt3", 00:15:41.705 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:41.705 "is_configured": true, 00:15:41.705 "data_offset": 2048, 00:15:41.705 "data_size": 63488 00:15:41.705 } 00:15:41.705 ] 00:15:41.705 }' 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.705 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.271 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:42.271 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:42.271 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:42.271 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:42.271 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:42.271 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:42.271 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:42.271 18:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:42.271 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.271 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.272 [2024-11-26 18:00:23.967966] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.272 18:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.272 18:00:24 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:42.272 "name": "raid_bdev1", 00:15:42.272 "aliases": [ 00:15:42.272 "1dcac61f-2b3c-4de8-87aa-d0fe4a100d30" 00:15:42.272 ], 00:15:42.272 "product_name": "Raid Volume", 00:15:42.272 "block_size": 512, 00:15:42.272 "num_blocks": 126976, 00:15:42.272 "uuid": "1dcac61f-2b3c-4de8-87aa-d0fe4a100d30", 00:15:42.272 "assigned_rate_limits": { 00:15:42.272 "rw_ios_per_sec": 0, 00:15:42.272 "rw_mbytes_per_sec": 0, 00:15:42.272 "r_mbytes_per_sec": 0, 00:15:42.272 "w_mbytes_per_sec": 0 00:15:42.272 }, 00:15:42.272 "claimed": false, 00:15:42.272 "zoned": false, 00:15:42.272 "supported_io_types": { 00:15:42.272 "read": true, 00:15:42.272 "write": true, 00:15:42.272 "unmap": false, 00:15:42.272 "flush": false, 00:15:42.272 "reset": true, 00:15:42.272 "nvme_admin": false, 00:15:42.272 "nvme_io": false, 00:15:42.272 "nvme_io_md": false, 00:15:42.272 "write_zeroes": true, 00:15:42.272 "zcopy": false, 00:15:42.272 "get_zone_info": false, 00:15:42.272 "zone_management": false, 00:15:42.272 "zone_append": false, 00:15:42.272 "compare": false, 00:15:42.272 "compare_and_write": false, 00:15:42.272 "abort": false, 00:15:42.272 "seek_hole": false, 00:15:42.272 "seek_data": false, 00:15:42.272 "copy": false, 00:15:42.272 "nvme_iov_md": false 00:15:42.272 }, 00:15:42.272 "driver_specific": { 00:15:42.272 "raid": { 00:15:42.272 "uuid": "1dcac61f-2b3c-4de8-87aa-d0fe4a100d30", 00:15:42.272 "strip_size_kb": 64, 00:15:42.272 "state": "online", 00:15:42.272 "raid_level": "raid5f", 00:15:42.272 "superblock": true, 00:15:42.272 "num_base_bdevs": 3, 00:15:42.272 "num_base_bdevs_discovered": 3, 00:15:42.272 "num_base_bdevs_operational": 3, 00:15:42.272 "base_bdevs_list": [ 00:15:42.272 { 00:15:42.272 "name": "pt1", 00:15:42.272 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:42.272 "is_configured": true, 00:15:42.272 "data_offset": 2048, 00:15:42.272 "data_size": 63488 00:15:42.272 }, 00:15:42.272 { 00:15:42.272 "name": "pt2", 00:15:42.272 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:15:42.272 "is_configured": true, 00:15:42.272 "data_offset": 2048, 00:15:42.272 "data_size": 63488 00:15:42.272 }, 00:15:42.272 { 00:15:42.272 "name": "pt3", 00:15:42.272 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:42.272 "is_configured": true, 00:15:42.272 "data_offset": 2048, 00:15:42.272 "data_size": 63488 00:15:42.272 } 00:15:42.272 ] 00:15:42.272 } 00:15:42.272 } 00:15:42.272 }' 00:15:42.272 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:42.272 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:42.272 pt2 00:15:42.272 pt3' 00:15:42.272 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.272 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:42.272 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.272 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:42.272 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.272 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.272 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.272 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.532 [2024-11-26 18:00:24.271435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1dcac61f-2b3c-4de8-87aa-d0fe4a100d30 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1dcac61f-2b3c-4de8-87aa-d0fe4a100d30 ']' 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.532 [2024-11-26 18:00:24.315118] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:42.532 [2024-11-26 18:00:24.315188] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:42.532 [2024-11-26 18:00:24.315300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.532 [2024-11-26 18:00:24.315417] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.532 [2024-11-26 18:00:24.315475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 
-- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:42.532 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.532 
18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 
00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.793 [2024-11-26 18:00:24.470941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:42.793 [2024-11-26 18:00:24.473021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:42.793 [2024-11-26 18:00:24.473158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:42.793 [2024-11-26 18:00:24.473243] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:42.793 [2024-11-26 18:00:24.473349] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:42.793 [2024-11-26 18:00:24.473423] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:42.793 [2024-11-26 18:00:24.473485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:42.793 [2024-11-26 18:00:24.473544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:42.793 request: 00:15:42.793 { 00:15:42.793 "name": "raid_bdev1", 00:15:42.793 "raid_level": "raid5f", 00:15:42.793 "base_bdevs": [ 00:15:42.793 "malloc1", 00:15:42.793 "malloc2", 00:15:42.793 "malloc3" 00:15:42.793 ], 00:15:42.793 "strip_size_kb": 64, 00:15:42.793 "superblock": false, 00:15:42.793 "method": "bdev_raid_create", 00:15:42.793 "req_id": 1 00:15:42.793 } 00:15:42.793 Got JSON-RPC error response 00:15:42.793 response: 00:15:42.793 { 00:15:42.793 "code": -17, 00:15:42.793 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:42.793 } 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.793 [2024-11-26 18:00:24.542756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:42.793 [2024-11-26 18:00:24.542886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.793 [2024-11-26 18:00:24.542931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:42.793 [2024-11-26 18:00:24.542973] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.793 [2024-11-26 18:00:24.545574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.793 [2024-11-26 18:00:24.545653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:42.793 [2024-11-26 18:00:24.545802] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:42.793 [2024-11-26 18:00:24.545894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:42.793 pt1 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.793 "name": "raid_bdev1", 00:15:42.793 "uuid": "1dcac61f-2b3c-4de8-87aa-d0fe4a100d30", 00:15:42.793 "strip_size_kb": 64, 00:15:42.793 "state": "configuring", 00:15:42.793 "raid_level": "raid5f", 00:15:42.793 "superblock": true, 00:15:42.793 "num_base_bdevs": 3, 00:15:42.793 "num_base_bdevs_discovered": 1, 00:15:42.793 "num_base_bdevs_operational": 3, 00:15:42.793 "base_bdevs_list": [ 00:15:42.793 { 00:15:42.793 "name": "pt1", 00:15:42.793 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:42.793 "is_configured": true, 00:15:42.793 "data_offset": 2048, 00:15:42.793 "data_size": 63488 00:15:42.793 }, 00:15:42.793 { 00:15:42.793 "name": null, 00:15:42.793 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:42.793 "is_configured": false, 00:15:42.793 "data_offset": 2048, 00:15:42.793 "data_size": 63488 00:15:42.793 }, 00:15:42.793 { 00:15:42.793 "name": null, 00:15:42.793 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:42.793 "is_configured": false, 00:15:42.793 "data_offset": 2048, 00:15:42.793 "data_size": 63488 00:15:42.793 } 00:15:42.793 ] 00:15:42.793 }' 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.793 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.364 [2024-11-26 18:00:24.981993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:43.364 [2024-11-26 18:00:24.982127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.364 [2024-11-26 18:00:24.982182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:43.364 [2024-11-26 18:00:24.982221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.364 [2024-11-26 18:00:24.982737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.364 [2024-11-26 18:00:24.982811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:43.364 [2024-11-26 18:00:24.982943] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:43.364 [2024-11-26 18:00:24.983006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:43.364 pt2 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.364 [2024-11-26 18:00:24.989967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.364 18:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.364 18:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.364 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.364 "name": "raid_bdev1", 00:15:43.364 "uuid": "1dcac61f-2b3c-4de8-87aa-d0fe4a100d30", 00:15:43.364 "strip_size_kb": 64, 00:15:43.364 "state": "configuring", 00:15:43.364 "raid_level": "raid5f", 00:15:43.364 "superblock": true, 00:15:43.364 "num_base_bdevs": 3, 00:15:43.364 "num_base_bdevs_discovered": 1, 00:15:43.364 
"num_base_bdevs_operational": 3, 00:15:43.364 "base_bdevs_list": [ 00:15:43.364 { 00:15:43.364 "name": "pt1", 00:15:43.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:43.364 "is_configured": true, 00:15:43.364 "data_offset": 2048, 00:15:43.364 "data_size": 63488 00:15:43.364 }, 00:15:43.364 { 00:15:43.364 "name": null, 00:15:43.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:43.364 "is_configured": false, 00:15:43.364 "data_offset": 0, 00:15:43.364 "data_size": 63488 00:15:43.364 }, 00:15:43.364 { 00:15:43.364 "name": null, 00:15:43.364 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:43.364 "is_configured": false, 00:15:43.364 "data_offset": 2048, 00:15:43.364 "data_size": 63488 00:15:43.364 } 00:15:43.364 ] 00:15:43.364 }' 00:15:43.364 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.364 18:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.624 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.625 [2024-11-26 18:00:25.449350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:43.625 [2024-11-26 18:00:25.449534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.625 [2024-11-26 18:00:25.449582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:43.625 [2024-11-26 18:00:25.449653] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.625 [2024-11-26 18:00:25.450204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.625 [2024-11-26 18:00:25.450271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:43.625 [2024-11-26 18:00:25.450397] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:43.625 [2024-11-26 18:00:25.450456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:43.625 pt2 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.625 [2024-11-26 18:00:25.461321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:43.625 [2024-11-26 18:00:25.461457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.625 [2024-11-26 18:00:25.461508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:43.625 [2024-11-26 18:00:25.461549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.625 [2024-11-26 18:00:25.462006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.625 [2024-11-26 18:00:25.462085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:43.625 [2024-11-26 
18:00:25.462196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:43.625 [2024-11-26 18:00:25.462251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:43.625 [2024-11-26 18:00:25.462432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:43.625 [2024-11-26 18:00:25.462483] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:43.625 [2024-11-26 18:00:25.462820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:43.625 [2024-11-26 18:00:25.468523] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:43.625 [2024-11-26 18:00:25.468580] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:43.625 [2024-11-26 18:00:25.468823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.625 pt3 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.625 18:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.886 18:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.886 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.886 "name": "raid_bdev1", 00:15:43.886 "uuid": "1dcac61f-2b3c-4de8-87aa-d0fe4a100d30", 00:15:43.886 "strip_size_kb": 64, 00:15:43.886 "state": "online", 00:15:43.886 "raid_level": "raid5f", 00:15:43.886 "superblock": true, 00:15:43.886 "num_base_bdevs": 3, 00:15:43.886 "num_base_bdevs_discovered": 3, 00:15:43.886 "num_base_bdevs_operational": 3, 00:15:43.886 "base_bdevs_list": [ 00:15:43.886 { 00:15:43.886 "name": "pt1", 00:15:43.886 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:43.886 "is_configured": true, 00:15:43.886 "data_offset": 2048, 00:15:43.886 "data_size": 63488 00:15:43.886 }, 00:15:43.886 { 00:15:43.886 "name": "pt2", 00:15:43.886 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:43.886 "is_configured": true, 00:15:43.886 "data_offset": 2048, 00:15:43.886 "data_size": 63488 00:15:43.886 }, 00:15:43.886 { 00:15:43.886 "name": "pt3", 
00:15:43.886 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:43.886 "is_configured": true, 00:15:43.886 "data_offset": 2048, 00:15:43.886 "data_size": 63488 00:15:43.886 } 00:15:43.886 ] 00:15:43.886 }' 00:15:43.886 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.886 18:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.145 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:44.145 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:44.145 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:44.145 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:44.145 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:44.146 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:44.146 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:44.146 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:44.146 18:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.146 18:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.146 [2024-11-26 18:00:25.935359] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.146 18:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.146 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:44.146 "name": "raid_bdev1", 00:15:44.146 "aliases": [ 00:15:44.146 "1dcac61f-2b3c-4de8-87aa-d0fe4a100d30" 00:15:44.146 ], 00:15:44.146 "product_name": "Raid 
Volume", 00:15:44.146 "block_size": 512, 00:15:44.146 "num_blocks": 126976, 00:15:44.146 "uuid": "1dcac61f-2b3c-4de8-87aa-d0fe4a100d30", 00:15:44.146 "assigned_rate_limits": { 00:15:44.146 "rw_ios_per_sec": 0, 00:15:44.146 "rw_mbytes_per_sec": 0, 00:15:44.146 "r_mbytes_per_sec": 0, 00:15:44.146 "w_mbytes_per_sec": 0 00:15:44.146 }, 00:15:44.146 "claimed": false, 00:15:44.146 "zoned": false, 00:15:44.146 "supported_io_types": { 00:15:44.146 "read": true, 00:15:44.146 "write": true, 00:15:44.146 "unmap": false, 00:15:44.146 "flush": false, 00:15:44.146 "reset": true, 00:15:44.146 "nvme_admin": false, 00:15:44.146 "nvme_io": false, 00:15:44.146 "nvme_io_md": false, 00:15:44.146 "write_zeroes": true, 00:15:44.146 "zcopy": false, 00:15:44.146 "get_zone_info": false, 00:15:44.146 "zone_management": false, 00:15:44.146 "zone_append": false, 00:15:44.146 "compare": false, 00:15:44.146 "compare_and_write": false, 00:15:44.146 "abort": false, 00:15:44.146 "seek_hole": false, 00:15:44.146 "seek_data": false, 00:15:44.146 "copy": false, 00:15:44.146 "nvme_iov_md": false 00:15:44.146 }, 00:15:44.146 "driver_specific": { 00:15:44.146 "raid": { 00:15:44.146 "uuid": "1dcac61f-2b3c-4de8-87aa-d0fe4a100d30", 00:15:44.146 "strip_size_kb": 64, 00:15:44.146 "state": "online", 00:15:44.146 "raid_level": "raid5f", 00:15:44.146 "superblock": true, 00:15:44.146 "num_base_bdevs": 3, 00:15:44.146 "num_base_bdevs_discovered": 3, 00:15:44.146 "num_base_bdevs_operational": 3, 00:15:44.146 "base_bdevs_list": [ 00:15:44.146 { 00:15:44.146 "name": "pt1", 00:15:44.146 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:44.146 "is_configured": true, 00:15:44.146 "data_offset": 2048, 00:15:44.146 "data_size": 63488 00:15:44.146 }, 00:15:44.146 { 00:15:44.146 "name": "pt2", 00:15:44.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:44.146 "is_configured": true, 00:15:44.146 "data_offset": 2048, 00:15:44.146 "data_size": 63488 00:15:44.146 }, 00:15:44.146 { 00:15:44.146 "name": "pt3", 
00:15:44.146 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:44.146 "is_configured": true, 00:15:44.146 "data_offset": 2048, 00:15:44.146 "data_size": 63488 00:15:44.146 } 00:15:44.146 ] 00:15:44.146 } 00:15:44.146 } 00:15:44.146 }' 00:15:44.146 18:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:44.405 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:44.405 pt2 00:15:44.405 pt3' 00:15:44.405 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.405 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:44.405 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.405 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.405 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:44.405 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.405 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.405 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.405 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.405 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.405 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:44.406 18:00:26 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:44.406 [2024-11-26 18:00:26.234836] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.406 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1dcac61f-2b3c-4de8-87aa-d0fe4a100d30 '!=' 1dcac61f-2b3c-4de8-87aa-d0fe4a100d30 ']' 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.665 [2024-11-26 18:00:26.278580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:44.665 18:00:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.665 "name": "raid_bdev1", 00:15:44.665 "uuid": "1dcac61f-2b3c-4de8-87aa-d0fe4a100d30", 00:15:44.665 "strip_size_kb": 64, 00:15:44.665 "state": "online", 00:15:44.665 "raid_level": "raid5f", 00:15:44.665 "superblock": true, 00:15:44.665 "num_base_bdevs": 3, 00:15:44.665 "num_base_bdevs_discovered": 2, 00:15:44.665 "num_base_bdevs_operational": 2, 00:15:44.665 "base_bdevs_list": [ 00:15:44.665 { 00:15:44.665 "name": null, 00:15:44.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.665 "is_configured": false, 00:15:44.665 "data_offset": 0, 00:15:44.665 "data_size": 63488 00:15:44.665 }, 00:15:44.665 { 00:15:44.665 "name": "pt2", 00:15:44.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:44.665 "is_configured": true, 00:15:44.665 "data_offset": 2048, 00:15:44.665 "data_size": 63488 00:15:44.665 }, 00:15:44.665 { 00:15:44.665 "name": "pt3", 00:15:44.665 "uuid": "00000000-0000-0000-0000-000000000003", 
00:15:44.665 "is_configured": true, 00:15:44.665 "data_offset": 2048, 00:15:44.665 "data_size": 63488 00:15:44.665 } 00:15:44.665 ] 00:15:44.665 }' 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.665 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.926 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:44.926 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.926 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.926 [2024-11-26 18:00:26.737726] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:44.926 [2024-11-26 18:00:26.737807] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:44.926 [2024-11-26 18:00:26.737932] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.926 [2024-11-26 18:00:26.738038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.926 [2024-11-26 18:00:26.738094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:44.926 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.926 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.926 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:44.926 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.926 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.926 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.186 
18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p 
pt2 -u 00000000-0000-0000-0000-000000000002 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.186 [2024-11-26 18:00:26.825550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:45.186 [2024-11-26 18:00:26.825675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.186 [2024-11-26 18:00:26.825716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:45.186 [2024-11-26 18:00:26.825761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.186 [2024-11-26 18:00:26.828207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.186 [2024-11-26 18:00:26.828287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:45.186 [2024-11-26 18:00:26.828405] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:45.186 [2024-11-26 18:00:26.828519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:45.186 pt2 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.186 "name": "raid_bdev1", 00:15:45.186 "uuid": "1dcac61f-2b3c-4de8-87aa-d0fe4a100d30", 00:15:45.186 "strip_size_kb": 64, 00:15:45.186 "state": "configuring", 00:15:45.186 "raid_level": "raid5f", 00:15:45.186 "superblock": true, 00:15:45.186 "num_base_bdevs": 3, 00:15:45.186 "num_base_bdevs_discovered": 1, 00:15:45.186 "num_base_bdevs_operational": 2, 00:15:45.186 "base_bdevs_list": [ 00:15:45.186 { 00:15:45.186 "name": null, 00:15:45.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.186 "is_configured": false, 00:15:45.186 "data_offset": 2048, 00:15:45.186 "data_size": 63488 00:15:45.186 }, 00:15:45.186 { 00:15:45.186 "name": "pt2", 00:15:45.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.186 "is_configured": true, 00:15:45.186 "data_offset": 2048, 00:15:45.186 "data_size": 63488 00:15:45.186 }, 
00:15:45.186 { 00:15:45.186 "name": null, 00:15:45.186 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.186 "is_configured": false, 00:15:45.186 "data_offset": 2048, 00:15:45.186 "data_size": 63488 00:15:45.186 } 00:15:45.186 ] 00:15:45.186 }' 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.186 18:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.447 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:45.447 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:45.447 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:15:45.447 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:45.447 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.447 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.447 [2024-11-26 18:00:27.304963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:45.447 [2024-11-26 18:00:27.305158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.447 [2024-11-26 18:00:27.305209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:45.447 [2024-11-26 18:00:27.305270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.447 [2024-11-26 18:00:27.305920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.447 [2024-11-26 18:00:27.305998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:45.447 [2024-11-26 18:00:27.306147] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:45.447 
[2024-11-26 18:00:27.306215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:45.447 [2024-11-26 18:00:27.306400] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:45.447 [2024-11-26 18:00:27.306449] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:45.447 [2024-11-26 18:00:27.306803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:45.708 pt3 00:15:45.708 [2024-11-26 18:00:27.313441] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:45.708 [2024-11-26 18:00:27.313467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:45.708 [2024-11-26 18:00:27.313927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.708 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.708 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:45.708 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.708 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.708 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.708 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.708 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.708 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.708 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.708 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 
-- # local num_base_bdevs_discovered 00:15:45.708 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.708 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.708 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.708 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.708 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.708 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.708 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.708 "name": "raid_bdev1", 00:15:45.708 "uuid": "1dcac61f-2b3c-4de8-87aa-d0fe4a100d30", 00:15:45.708 "strip_size_kb": 64, 00:15:45.708 "state": "online", 00:15:45.708 "raid_level": "raid5f", 00:15:45.708 "superblock": true, 00:15:45.708 "num_base_bdevs": 3, 00:15:45.708 "num_base_bdevs_discovered": 2, 00:15:45.708 "num_base_bdevs_operational": 2, 00:15:45.708 "base_bdevs_list": [ 00:15:45.708 { 00:15:45.708 "name": null, 00:15:45.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.709 "is_configured": false, 00:15:45.709 "data_offset": 2048, 00:15:45.709 "data_size": 63488 00:15:45.709 }, 00:15:45.709 { 00:15:45.709 "name": "pt2", 00:15:45.709 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.709 "is_configured": true, 00:15:45.709 "data_offset": 2048, 00:15:45.709 "data_size": 63488 00:15:45.709 }, 00:15:45.709 { 00:15:45.709 "name": "pt3", 00:15:45.709 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.709 "is_configured": true, 00:15:45.709 "data_offset": 2048, 00:15:45.709 "data_size": 63488 00:15:45.709 } 00:15:45.709 ] 00:15:45.709 }' 00:15:45.709 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.709 
18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.969 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:45.969 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.969 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.969 [2024-11-26 18:00:27.770051] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:45.969 [2024-11-26 18:00:27.770087] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:45.969 [2024-11-26 18:00:27.770187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:45.969 [2024-11-26 18:00:27.770265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:45.969 [2024-11-26 18:00:27.770282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:45.969 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.969 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.969 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.969 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.969 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:45.969 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.969 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:45.969 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:45.969 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- 
# '[' 3 -gt 2 ']' 00:15:45.969 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:45.969 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:45.969 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.969 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.230 [2024-11-26 18:00:27.841956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:46.230 [2024-11-26 18:00:27.842055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.230 [2024-11-26 18:00:27.842083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:46.230 [2024-11-26 18:00:27.842095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.230 [2024-11-26 18:00:27.845046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.230 [2024-11-26 18:00:27.845088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:46.230 [2024-11-26 18:00:27.845197] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:46.230 [2024-11-26 18:00:27.845252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:46.230 [2024-11-26 18:00:27.845463] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:46.230 [2024-11-26 18:00:27.845489] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:46.230 [2024-11-26 18:00:27.845511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:46.230 [2024-11-26 18:00:27.845590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:46.230 pt1 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.230 18:00:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.230 "name": "raid_bdev1", 00:15:46.230 "uuid": "1dcac61f-2b3c-4de8-87aa-d0fe4a100d30", 00:15:46.230 "strip_size_kb": 64, 00:15:46.230 "state": "configuring", 00:15:46.230 "raid_level": "raid5f", 00:15:46.230 "superblock": true, 00:15:46.230 "num_base_bdevs": 3, 00:15:46.230 "num_base_bdevs_discovered": 1, 00:15:46.230 "num_base_bdevs_operational": 2, 00:15:46.230 "base_bdevs_list": [ 00:15:46.230 { 00:15:46.230 "name": null, 00:15:46.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.230 "is_configured": false, 00:15:46.230 "data_offset": 2048, 00:15:46.230 "data_size": 63488 00:15:46.230 }, 00:15:46.230 { 00:15:46.230 "name": "pt2", 00:15:46.230 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.230 "is_configured": true, 00:15:46.230 "data_offset": 2048, 00:15:46.230 "data_size": 63488 00:15:46.230 }, 00:15:46.230 { 00:15:46.230 "name": null, 00:15:46.230 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.230 "is_configured": false, 00:15:46.230 "data_offset": 2048, 00:15:46.230 "data_size": 63488 00:15:46.230 } 00:15:46.230 ] 00:15:46.230 }' 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.230 18:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.489 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:46.489 18:00:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.489 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.489 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:46.489 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.490 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:46.490 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:46.490 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.490 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.490 [2024-11-26 18:00:28.341618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:46.490 [2024-11-26 18:00:28.341701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.490 [2024-11-26 18:00:28.341728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:46.490 [2024-11-26 18:00:28.341740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.490 [2024-11-26 18:00:28.342356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.490 [2024-11-26 18:00:28.342390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:46.490 [2024-11-26 18:00:28.342500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:46.490 [2024-11-26 18:00:28.342529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:46.490 [2024-11-26 18:00:28.342705] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:46.490 [2024-11-26 
18:00:28.342726] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:46.490 [2024-11-26 18:00:28.343075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:46.490 [2024-11-26 18:00:28.350941] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:46.490 [2024-11-26 18:00:28.350994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:46.490 [2024-11-26 18:00:28.351311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.490 pt3 00:15:46.749 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.749 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:46.749 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.749 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.749 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.749 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.749 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:46.749 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.749 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.749 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.749 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.749 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:46.749 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.749 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.749 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.749 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.750 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.750 "name": "raid_bdev1", 00:15:46.750 "uuid": "1dcac61f-2b3c-4de8-87aa-d0fe4a100d30", 00:15:46.750 "strip_size_kb": 64, 00:15:46.750 "state": "online", 00:15:46.750 "raid_level": "raid5f", 00:15:46.750 "superblock": true, 00:15:46.750 "num_base_bdevs": 3, 00:15:46.750 "num_base_bdevs_discovered": 2, 00:15:46.750 "num_base_bdevs_operational": 2, 00:15:46.750 "base_bdevs_list": [ 00:15:46.750 { 00:15:46.750 "name": null, 00:15:46.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.750 "is_configured": false, 00:15:46.750 "data_offset": 2048, 00:15:46.750 "data_size": 63488 00:15:46.750 }, 00:15:46.750 { 00:15:46.750 "name": "pt2", 00:15:46.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.750 "is_configured": true, 00:15:46.750 "data_offset": 2048, 00:15:46.750 "data_size": 63488 00:15:46.750 }, 00:15:46.750 { 00:15:46.750 "name": "pt3", 00:15:46.750 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.750 "is_configured": true, 00:15:46.750 "data_offset": 2048, 00:15:46.750 "data_size": 63488 00:15:46.750 } 00:15:46.750 ] 00:15:46.750 }' 00:15:46.750 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.750 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.009 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:47.009 18:00:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:47.009 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.009 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.010 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.010 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:47.010 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:47.010 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.010 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.010 [2024-11-26 18:00:28.851229] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.010 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:47.010 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.269 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1dcac61f-2b3c-4de8-87aa-d0fe4a100d30 '!=' 1dcac61f-2b3c-4de8-87aa-d0fe4a100d30 ']' 00:15:47.269 18:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81523 00:15:47.269 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81523 ']' 00:15:47.269 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81523 00:15:47.269 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:47.269 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.269 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 81523 00:15:47.269 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:47.269 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:47.269 killing process with pid 81523 00:15:47.269 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81523' 00:15:47.269 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81523 00:15:47.269 [2024-11-26 18:00:28.944169] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:47.269 [2024-11-26 18:00:28.944300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.269 18:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81523 00:15:47.269 [2024-11-26 18:00:28.944387] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.269 [2024-11-26 18:00:28.944402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:47.527 [2024-11-26 18:00:29.256320] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.907 18:00:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:48.907 00:15:48.907 real 0m8.260s 00:15:48.907 user 0m12.855s 00:15:48.907 sys 0m1.496s 00:15:48.907 18:00:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.907 18:00:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.907 ************************************ 00:15:48.907 END TEST raid5f_superblock_test 00:15:48.907 ************************************ 00:15:48.907 18:00:30 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:48.907 18:00:30 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test 
raid_rebuild_test raid5f 3 false false true 00:15:48.907 18:00:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:48.907 18:00:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.907 18:00:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:48.907 ************************************ 00:15:48.907 START TEST raid5f_rebuild_test 00:15:48.907 ************************************ 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 
00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81969 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81969 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81969 ']' 00:15:48.907 18:00:30 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.907 18:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.907 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:48.907 Zero copy mechanism will not be used. 00:15:48.907 [2024-11-26 18:00:30.663074] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:15:48.907 [2024-11-26 18:00:30.663198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81969 ] 00:15:49.212 [2024-11-26 18:00:30.838662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.212 [2024-11-26 18:00:30.966196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.472 [2024-11-26 18:00:31.182744] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.472 [2024-11-26 18:00:31.182819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.732 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.732 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:49.732 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:15:49.732 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:49.732 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.732 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.732 BaseBdev1_malloc 00:15:49.732 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.732 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:49.732 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.732 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.993 [2024-11-26 18:00:31.595468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:49.993 [2024-11-26 18:00:31.595542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.993 [2024-11-26 18:00:31.595569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:49.993 [2024-11-26 18:00:31.595581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.993 [2024-11-26 18:00:31.598100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.993 [2024-11-26 18:00:31.598143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:49.993 BaseBdev1 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.993 BaseBdev2_malloc 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.993 [2024-11-26 18:00:31.648602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:49.993 [2024-11-26 18:00:31.648675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.993 [2024-11-26 18:00:31.648729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:49.993 [2024-11-26 18:00:31.648741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.993 [2024-11-26 18:00:31.651073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.993 [2024-11-26 18:00:31.651110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:49.993 BaseBdev2 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.993 BaseBdev3_malloc 00:15:49.993 18:00:31 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.993 [2024-11-26 18:00:31.718449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:49.993 [2024-11-26 18:00:31.718522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.993 [2024-11-26 18:00:31.718549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:49.993 [2024-11-26 18:00:31.718562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.993 [2024-11-26 18:00:31.720910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.993 [2024-11-26 18:00:31.720954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:49.993 BaseBdev3 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.993 spare_malloc 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.993 spare_delay 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.993 [2024-11-26 18:00:31.788895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:49.993 [2024-11-26 18:00:31.788978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.993 [2024-11-26 18:00:31.789000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:49.993 [2024-11-26 18:00:31.789013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.993 [2024-11-26 18:00:31.791531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.993 [2024-11-26 18:00:31.791576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:49.993 spare 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.993 [2024-11-26 18:00:31.800970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.993 [2024-11-26 
18:00:31.803121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:49.993 [2024-11-26 18:00:31.803225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:49.993 [2024-11-26 18:00:31.803329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:49.993 [2024-11-26 18:00:31.803342] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:49.993 [2024-11-26 18:00:31.803658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:49.993 [2024-11-26 18:00:31.809821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:49.993 [2024-11-26 18:00:31.809849] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:49.993 [2024-11-26 18:00:31.810125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.993 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.253 18:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.253 "name": "raid_bdev1", 00:15:50.253 "uuid": "edb95c45-0811-4d1c-ae8a-36465d4eaea6", 00:15:50.253 "strip_size_kb": 64, 00:15:50.253 "state": "online", 00:15:50.253 "raid_level": "raid5f", 00:15:50.253 "superblock": false, 00:15:50.253 "num_base_bdevs": 3, 00:15:50.253 "num_base_bdevs_discovered": 3, 00:15:50.253 "num_base_bdevs_operational": 3, 00:15:50.253 "base_bdevs_list": [ 00:15:50.253 { 00:15:50.253 "name": "BaseBdev1", 00:15:50.253 "uuid": "f6daeee0-9974-52d9-b267-2aa73efe5715", 00:15:50.253 "is_configured": true, 00:15:50.253 "data_offset": 0, 00:15:50.253 "data_size": 65536 00:15:50.253 }, 00:15:50.253 { 00:15:50.253 "name": "BaseBdev2", 00:15:50.253 "uuid": "4f3b57b0-43eb-5646-a758-359c517b12fa", 00:15:50.253 "is_configured": true, 00:15:50.253 "data_offset": 0, 00:15:50.253 "data_size": 65536 00:15:50.253 }, 00:15:50.253 { 00:15:50.253 "name": "BaseBdev3", 00:15:50.253 "uuid": "c4ac7292-07de-5a6f-86a8-948c8a52ac09", 00:15:50.253 "is_configured": true, 00:15:50.253 "data_offset": 0, 00:15:50.253 "data_size": 65536 00:15:50.253 } 00:15:50.253 ] 00:15:50.253 }' 00:15:50.253 18:00:31 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.253 18:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.513 [2024-11-26 18:00:32.220971] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:50.513 
18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:50.513 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:50.772 [2024-11-26 18:00:32.520282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:50.772 /dev/nbd0 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:50.772 18:00:32 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.772 1+0 records in 00:15:50.772 1+0 records out 00:15:50.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250414 s, 16.4 MB/s 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:50.772 18:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:51.339 512+0 records in 00:15:51.339 512+0 records out 00:15:51.339 67108864 bytes (67 MB, 64 MiB) copied, 0.418632 s, 160 MB/s 00:15:51.339 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 
00:15:51.339 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.339 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:51.339 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:51.339 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:51.339 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.339 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:51.598 [2024-11-26 18:00:33.250426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.598 [2024-11-26 18:00:33.267174] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.598 "name": "raid_bdev1", 00:15:51.598 "uuid": 
"edb95c45-0811-4d1c-ae8a-36465d4eaea6", 00:15:51.598 "strip_size_kb": 64, 00:15:51.598 "state": "online", 00:15:51.598 "raid_level": "raid5f", 00:15:51.598 "superblock": false, 00:15:51.598 "num_base_bdevs": 3, 00:15:51.598 "num_base_bdevs_discovered": 2, 00:15:51.598 "num_base_bdevs_operational": 2, 00:15:51.598 "base_bdevs_list": [ 00:15:51.598 { 00:15:51.598 "name": null, 00:15:51.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.598 "is_configured": false, 00:15:51.598 "data_offset": 0, 00:15:51.598 "data_size": 65536 00:15:51.598 }, 00:15:51.598 { 00:15:51.598 "name": "BaseBdev2", 00:15:51.598 "uuid": "4f3b57b0-43eb-5646-a758-359c517b12fa", 00:15:51.598 "is_configured": true, 00:15:51.598 "data_offset": 0, 00:15:51.598 "data_size": 65536 00:15:51.598 }, 00:15:51.598 { 00:15:51.598 "name": "BaseBdev3", 00:15:51.598 "uuid": "c4ac7292-07de-5a6f-86a8-948c8a52ac09", 00:15:51.598 "is_configured": true, 00:15:51.598 "data_offset": 0, 00:15:51.598 "data_size": 65536 00:15:51.598 } 00:15:51.598 ] 00:15:51.598 }' 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.598 18:00:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.164 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:52.164 18:00:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.164 18:00:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.164 [2024-11-26 18:00:33.754372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:52.164 [2024-11-26 18:00:33.774869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:52.164 18:00:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.164 18:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 
00:15:52.164 [2024-11-26 18:00:33.783964] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:53.099 18:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.099 18:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.099 18:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.099 18:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.099 18:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.099 18:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.099 18:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.099 18:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.099 18:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.099 18:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.099 18:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.099 "name": "raid_bdev1", 00:15:53.099 "uuid": "edb95c45-0811-4d1c-ae8a-36465d4eaea6", 00:15:53.099 "strip_size_kb": 64, 00:15:53.099 "state": "online", 00:15:53.099 "raid_level": "raid5f", 00:15:53.099 "superblock": false, 00:15:53.099 "num_base_bdevs": 3, 00:15:53.099 "num_base_bdevs_discovered": 3, 00:15:53.099 "num_base_bdevs_operational": 3, 00:15:53.099 "process": { 00:15:53.099 "type": "rebuild", 00:15:53.099 "target": "spare", 00:15:53.099 "progress": { 00:15:53.099 "blocks": 20480, 00:15:53.099 "percent": 15 00:15:53.099 } 00:15:53.099 }, 00:15:53.099 "base_bdevs_list": [ 00:15:53.099 { 00:15:53.099 "name": "spare", 00:15:53.099 
"uuid": "19d382ef-5ffc-5bf1-b954-942cef5626ef", 00:15:53.099 "is_configured": true, 00:15:53.099 "data_offset": 0, 00:15:53.099 "data_size": 65536 00:15:53.099 }, 00:15:53.099 { 00:15:53.099 "name": "BaseBdev2", 00:15:53.099 "uuid": "4f3b57b0-43eb-5646-a758-359c517b12fa", 00:15:53.099 "is_configured": true, 00:15:53.099 "data_offset": 0, 00:15:53.099 "data_size": 65536 00:15:53.099 }, 00:15:53.099 { 00:15:53.099 "name": "BaseBdev3", 00:15:53.099 "uuid": "c4ac7292-07de-5a6f-86a8-948c8a52ac09", 00:15:53.099 "is_configured": true, 00:15:53.099 "data_offset": 0, 00:15:53.099 "data_size": 65536 00:15:53.099 } 00:15:53.099 ] 00:15:53.099 }' 00:15:53.099 18:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.099 18:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.099 18:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.099 18:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.099 18:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:53.099 18:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.099 18:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.099 [2024-11-26 18:00:34.916122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:53.358 [2024-11-26 18:00:34.996281] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:53.358 [2024-11-26 18:00:34.996364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.358 [2024-11-26 18:00:34.996389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:53.358 [2024-11-26 18:00:34.996399] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: 
*ERROR*: Failed to remove target bdev: No such device 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.358 "name": "raid_bdev1", 00:15:53.358 "uuid": "edb95c45-0811-4d1c-ae8a-36465d4eaea6", 
00:15:53.358 "strip_size_kb": 64, 00:15:53.358 "state": "online", 00:15:53.358 "raid_level": "raid5f", 00:15:53.358 "superblock": false, 00:15:53.358 "num_base_bdevs": 3, 00:15:53.358 "num_base_bdevs_discovered": 2, 00:15:53.358 "num_base_bdevs_operational": 2, 00:15:53.358 "base_bdevs_list": [ 00:15:53.358 { 00:15:53.358 "name": null, 00:15:53.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.358 "is_configured": false, 00:15:53.358 "data_offset": 0, 00:15:53.358 "data_size": 65536 00:15:53.358 }, 00:15:53.358 { 00:15:53.358 "name": "BaseBdev2", 00:15:53.358 "uuid": "4f3b57b0-43eb-5646-a758-359c517b12fa", 00:15:53.358 "is_configured": true, 00:15:53.358 "data_offset": 0, 00:15:53.358 "data_size": 65536 00:15:53.358 }, 00:15:53.358 { 00:15:53.358 "name": "BaseBdev3", 00:15:53.358 "uuid": "c4ac7292-07de-5a6f-86a8-948c8a52ac09", 00:15:53.358 "is_configured": true, 00:15:53.358 "data_offset": 0, 00:15:53.358 "data_size": 65536 00:15:53.358 } 00:15:53.358 ] 00:15:53.358 }' 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.358 18:00:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.924 18:00:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.924 "name": "raid_bdev1", 00:15:53.924 "uuid": "edb95c45-0811-4d1c-ae8a-36465d4eaea6", 00:15:53.924 "strip_size_kb": 64, 00:15:53.924 "state": "online", 00:15:53.924 "raid_level": "raid5f", 00:15:53.924 "superblock": false, 00:15:53.924 "num_base_bdevs": 3, 00:15:53.924 "num_base_bdevs_discovered": 2, 00:15:53.924 "num_base_bdevs_operational": 2, 00:15:53.924 "base_bdevs_list": [ 00:15:53.924 { 00:15:53.924 "name": null, 00:15:53.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.924 "is_configured": false, 00:15:53.924 "data_offset": 0, 00:15:53.924 "data_size": 65536 00:15:53.924 }, 00:15:53.924 { 00:15:53.924 "name": "BaseBdev2", 00:15:53.924 "uuid": "4f3b57b0-43eb-5646-a758-359c517b12fa", 00:15:53.924 "is_configured": true, 00:15:53.924 "data_offset": 0, 00:15:53.924 "data_size": 65536 00:15:53.924 }, 00:15:53.924 { 00:15:53.924 "name": "BaseBdev3", 00:15:53.924 "uuid": "c4ac7292-07de-5a6f-86a8-948c8a52ac09", 00:15:53.924 "is_configured": true, 00:15:53.924 "data_offset": 0, 00:15:53.924 "data_size": 65536 00:15:53.924 } 00:15:53.924 ] 00:15:53.924 }' 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.924 [2024-11-26 18:00:35.656671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:53.924 [2024-11-26 18:00:35.675784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.924 18:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:53.924 [2024-11-26 18:00:35.684584] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:54.859 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.859 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.859 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.859 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.859 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.859 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.859 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.859 18:00:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.859 18:00:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.859 18:00:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.119 18:00:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.119 "name": "raid_bdev1", 00:15:55.119 "uuid": "edb95c45-0811-4d1c-ae8a-36465d4eaea6", 00:15:55.119 "strip_size_kb": 64, 00:15:55.119 "state": "online", 00:15:55.119 "raid_level": "raid5f", 00:15:55.119 "superblock": false, 00:15:55.119 "num_base_bdevs": 3, 00:15:55.119 "num_base_bdevs_discovered": 3, 00:15:55.119 "num_base_bdevs_operational": 3, 00:15:55.119 "process": { 00:15:55.119 "type": "rebuild", 00:15:55.119 "target": "spare", 00:15:55.119 "progress": { 00:15:55.119 "blocks": 20480, 00:15:55.119 "percent": 15 00:15:55.119 } 00:15:55.119 }, 00:15:55.119 "base_bdevs_list": [ 00:15:55.119 { 00:15:55.119 "name": "spare", 00:15:55.119 "uuid": "19d382ef-5ffc-5bf1-b954-942cef5626ef", 00:15:55.119 "is_configured": true, 00:15:55.119 "data_offset": 0, 00:15:55.119 "data_size": 65536 00:15:55.119 }, 00:15:55.119 { 00:15:55.119 "name": "BaseBdev2", 00:15:55.119 "uuid": "4f3b57b0-43eb-5646-a758-359c517b12fa", 00:15:55.119 "is_configured": true, 00:15:55.119 "data_offset": 0, 00:15:55.119 "data_size": 65536 00:15:55.119 }, 00:15:55.119 { 00:15:55.119 "name": "BaseBdev3", 00:15:55.119 "uuid": "c4ac7292-07de-5a6f-86a8-948c8a52ac09", 00:15:55.119 "is_configured": true, 00:15:55.119 "data_offset": 0, 00:15:55.119 "data_size": 65536 00:15:55.119 } 00:15:55.119 ] 00:15:55.119 }' 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=577 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.119 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.119 "name": "raid_bdev1", 00:15:55.119 "uuid": "edb95c45-0811-4d1c-ae8a-36465d4eaea6", 00:15:55.119 "strip_size_kb": 64, 00:15:55.119 "state": "online", 00:15:55.119 "raid_level": "raid5f", 00:15:55.119 "superblock": false, 00:15:55.119 "num_base_bdevs": 3, 00:15:55.119 "num_base_bdevs_discovered": 3, 00:15:55.119 "num_base_bdevs_operational": 3, 00:15:55.119 "process": { 00:15:55.119 "type": "rebuild", 
00:15:55.119 "target": "spare", 00:15:55.119 "progress": { 00:15:55.119 "blocks": 22528, 00:15:55.119 "percent": 17 00:15:55.119 } 00:15:55.119 }, 00:15:55.119 "base_bdevs_list": [ 00:15:55.119 { 00:15:55.119 "name": "spare", 00:15:55.119 "uuid": "19d382ef-5ffc-5bf1-b954-942cef5626ef", 00:15:55.119 "is_configured": true, 00:15:55.119 "data_offset": 0, 00:15:55.119 "data_size": 65536 00:15:55.119 }, 00:15:55.119 { 00:15:55.119 "name": "BaseBdev2", 00:15:55.119 "uuid": "4f3b57b0-43eb-5646-a758-359c517b12fa", 00:15:55.119 "is_configured": true, 00:15:55.119 "data_offset": 0, 00:15:55.119 "data_size": 65536 00:15:55.119 }, 00:15:55.119 { 00:15:55.120 "name": "BaseBdev3", 00:15:55.120 "uuid": "c4ac7292-07de-5a6f-86a8-948c8a52ac09", 00:15:55.120 "is_configured": true, 00:15:55.120 "data_offset": 0, 00:15:55.120 "data_size": 65536 00:15:55.120 } 00:15:55.120 ] 00:15:55.120 }' 00:15:55.120 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.120 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.120 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.120 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.120 18:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:56.495 18:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:56.495 18:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.495 18:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.495 18:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.495 18:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.495 
18:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.495 18:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.495 18:00:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.495 18:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.495 18:00:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.495 18:00:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.495 18:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.495 "name": "raid_bdev1", 00:15:56.495 "uuid": "edb95c45-0811-4d1c-ae8a-36465d4eaea6", 00:15:56.495 "strip_size_kb": 64, 00:15:56.495 "state": "online", 00:15:56.495 "raid_level": "raid5f", 00:15:56.495 "superblock": false, 00:15:56.495 "num_base_bdevs": 3, 00:15:56.495 "num_base_bdevs_discovered": 3, 00:15:56.495 "num_base_bdevs_operational": 3, 00:15:56.495 "process": { 00:15:56.495 "type": "rebuild", 00:15:56.495 "target": "spare", 00:15:56.495 "progress": { 00:15:56.495 "blocks": 45056, 00:15:56.495 "percent": 34 00:15:56.495 } 00:15:56.495 }, 00:15:56.495 "base_bdevs_list": [ 00:15:56.495 { 00:15:56.495 "name": "spare", 00:15:56.495 "uuid": "19d382ef-5ffc-5bf1-b954-942cef5626ef", 00:15:56.495 "is_configured": true, 00:15:56.495 "data_offset": 0, 00:15:56.495 "data_size": 65536 00:15:56.495 }, 00:15:56.495 { 00:15:56.495 "name": "BaseBdev2", 00:15:56.495 "uuid": "4f3b57b0-43eb-5646-a758-359c517b12fa", 00:15:56.495 "is_configured": true, 00:15:56.495 "data_offset": 0, 00:15:56.495 "data_size": 65536 00:15:56.495 }, 00:15:56.495 { 00:15:56.495 "name": "BaseBdev3", 00:15:56.495 "uuid": "c4ac7292-07de-5a6f-86a8-948c8a52ac09", 00:15:56.495 "is_configured": true, 00:15:56.495 "data_offset": 0, 00:15:56.495 "data_size": 65536 00:15:56.495 
} 00:15:56.495 ] 00:15:56.495 }' 00:15:56.495 18:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.495 18:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.495 18:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.495 18:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.495 18:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:57.430 18:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.430 18:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.430 18:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.430 18:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.430 18:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.430 18:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.430 18:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.430 18:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.430 18:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.430 18:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.430 18:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.430 18:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.430 "name": "raid_bdev1", 00:15:57.430 "uuid": "edb95c45-0811-4d1c-ae8a-36465d4eaea6", 00:15:57.430 
"strip_size_kb": 64, 00:15:57.430 "state": "online", 00:15:57.430 "raid_level": "raid5f", 00:15:57.430 "superblock": false, 00:15:57.430 "num_base_bdevs": 3, 00:15:57.430 "num_base_bdevs_discovered": 3, 00:15:57.430 "num_base_bdevs_operational": 3, 00:15:57.430 "process": { 00:15:57.430 "type": "rebuild", 00:15:57.430 "target": "spare", 00:15:57.430 "progress": { 00:15:57.430 "blocks": 67584, 00:15:57.430 "percent": 51 00:15:57.430 } 00:15:57.430 }, 00:15:57.430 "base_bdevs_list": [ 00:15:57.430 { 00:15:57.430 "name": "spare", 00:15:57.430 "uuid": "19d382ef-5ffc-5bf1-b954-942cef5626ef", 00:15:57.430 "is_configured": true, 00:15:57.430 "data_offset": 0, 00:15:57.430 "data_size": 65536 00:15:57.430 }, 00:15:57.430 { 00:15:57.430 "name": "BaseBdev2", 00:15:57.430 "uuid": "4f3b57b0-43eb-5646-a758-359c517b12fa", 00:15:57.430 "is_configured": true, 00:15:57.430 "data_offset": 0, 00:15:57.430 "data_size": 65536 00:15:57.430 }, 00:15:57.430 { 00:15:57.430 "name": "BaseBdev3", 00:15:57.430 "uuid": "c4ac7292-07de-5a6f-86a8-948c8a52ac09", 00:15:57.430 "is_configured": true, 00:15:57.430 "data_offset": 0, 00:15:57.430 "data_size": 65536 00:15:57.430 } 00:15:57.430 ] 00:15:57.430 }' 00:15:57.430 18:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.430 18:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.430 18:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.430 18:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.430 18:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:58.368 18:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:58.368 18:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.368 18:00:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.368 18:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.368 18:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.368 18:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.368 18:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.368 18:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.368 18:00:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.368 18:00:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.626 18:00:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.626 18:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.626 "name": "raid_bdev1", 00:15:58.626 "uuid": "edb95c45-0811-4d1c-ae8a-36465d4eaea6", 00:15:58.626 "strip_size_kb": 64, 00:15:58.626 "state": "online", 00:15:58.626 "raid_level": "raid5f", 00:15:58.626 "superblock": false, 00:15:58.626 "num_base_bdevs": 3, 00:15:58.626 "num_base_bdevs_discovered": 3, 00:15:58.626 "num_base_bdevs_operational": 3, 00:15:58.626 "process": { 00:15:58.626 "type": "rebuild", 00:15:58.626 "target": "spare", 00:15:58.626 "progress": { 00:15:58.626 "blocks": 90112, 00:15:58.626 "percent": 68 00:15:58.626 } 00:15:58.626 }, 00:15:58.626 "base_bdevs_list": [ 00:15:58.626 { 00:15:58.626 "name": "spare", 00:15:58.626 "uuid": "19d382ef-5ffc-5bf1-b954-942cef5626ef", 00:15:58.626 "is_configured": true, 00:15:58.626 "data_offset": 0, 00:15:58.626 "data_size": 65536 00:15:58.626 }, 00:15:58.626 { 00:15:58.626 "name": "BaseBdev2", 00:15:58.626 "uuid": "4f3b57b0-43eb-5646-a758-359c517b12fa", 00:15:58.626 
"is_configured": true, 00:15:58.626 "data_offset": 0, 00:15:58.626 "data_size": 65536 00:15:58.626 }, 00:15:58.626 { 00:15:58.626 "name": "BaseBdev3", 00:15:58.626 "uuid": "c4ac7292-07de-5a6f-86a8-948c8a52ac09", 00:15:58.627 "is_configured": true, 00:15:58.627 "data_offset": 0, 00:15:58.627 "data_size": 65536 00:15:58.627 } 00:15:58.627 ] 00:15:58.627 }' 00:15:58.627 18:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.627 18:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.627 18:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.627 18:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.627 18:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:59.566 18:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.566 18:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.566 18:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.566 18:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.566 18:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.566 18:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.566 18:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.566 18:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.566 18:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.566 18:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:59.566 18:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.566 18:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.566 "name": "raid_bdev1", 00:15:59.566 "uuid": "edb95c45-0811-4d1c-ae8a-36465d4eaea6", 00:15:59.566 "strip_size_kb": 64, 00:15:59.566 "state": "online", 00:15:59.566 "raid_level": "raid5f", 00:15:59.566 "superblock": false, 00:15:59.566 "num_base_bdevs": 3, 00:15:59.566 "num_base_bdevs_discovered": 3, 00:15:59.566 "num_base_bdevs_operational": 3, 00:15:59.566 "process": { 00:15:59.566 "type": "rebuild", 00:15:59.566 "target": "spare", 00:15:59.566 "progress": { 00:15:59.566 "blocks": 114688, 00:15:59.566 "percent": 87 00:15:59.566 } 00:15:59.566 }, 00:15:59.566 "base_bdevs_list": [ 00:15:59.566 { 00:15:59.566 "name": "spare", 00:15:59.566 "uuid": "19d382ef-5ffc-5bf1-b954-942cef5626ef", 00:15:59.566 "is_configured": true, 00:15:59.566 "data_offset": 0, 00:15:59.566 "data_size": 65536 00:15:59.566 }, 00:15:59.566 { 00:15:59.566 "name": "BaseBdev2", 00:15:59.566 "uuid": "4f3b57b0-43eb-5646-a758-359c517b12fa", 00:15:59.566 "is_configured": true, 00:15:59.566 "data_offset": 0, 00:15:59.566 "data_size": 65536 00:15:59.566 }, 00:15:59.566 { 00:15:59.566 "name": "BaseBdev3", 00:15:59.566 "uuid": "c4ac7292-07de-5a6f-86a8-948c8a52ac09", 00:15:59.566 "is_configured": true, 00:15:59.566 "data_offset": 0, 00:15:59.566 "data_size": 65536 00:15:59.566 } 00:15:59.566 ] 00:15:59.566 }' 00:15:59.566 18:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.825 18:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.825 18:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.825 18:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.825 18:00:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.394 [2024-11-26 18:00:42.149851] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:00.394 [2024-11-26 18:00:42.149958] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:00.394 [2024-11-26 18:00:42.150010] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.654 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.654 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.654 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.654 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.654 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.654 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.654 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.654 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.654 18:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.654 18:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.914 "name": "raid_bdev1", 00:16:00.914 "uuid": "edb95c45-0811-4d1c-ae8a-36465d4eaea6", 00:16:00.914 "strip_size_kb": 64, 00:16:00.914 "state": "online", 00:16:00.914 "raid_level": "raid5f", 00:16:00.914 "superblock": false, 
00:16:00.914 "num_base_bdevs": 3, 00:16:00.914 "num_base_bdevs_discovered": 3, 00:16:00.914 "num_base_bdevs_operational": 3, 00:16:00.914 "base_bdevs_list": [ 00:16:00.914 { 00:16:00.914 "name": "spare", 00:16:00.914 "uuid": "19d382ef-5ffc-5bf1-b954-942cef5626ef", 00:16:00.914 "is_configured": true, 00:16:00.914 "data_offset": 0, 00:16:00.914 "data_size": 65536 00:16:00.914 }, 00:16:00.914 { 00:16:00.914 "name": "BaseBdev2", 00:16:00.914 "uuid": "4f3b57b0-43eb-5646-a758-359c517b12fa", 00:16:00.914 "is_configured": true, 00:16:00.914 "data_offset": 0, 00:16:00.914 "data_size": 65536 00:16:00.914 }, 00:16:00.914 { 00:16:00.914 "name": "BaseBdev3", 00:16:00.914 "uuid": "c4ac7292-07de-5a6f-86a8-948c8a52ac09", 00:16:00.914 "is_configured": true, 00:16:00.914 "data_offset": 0, 00:16:00.914 "data_size": 65536 00:16:00.914 } 00:16:00.914 ] 00:16:00.914 }' 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.914 "name": "raid_bdev1", 00:16:00.914 "uuid": "edb95c45-0811-4d1c-ae8a-36465d4eaea6", 00:16:00.914 "strip_size_kb": 64, 00:16:00.914 "state": "online", 00:16:00.914 "raid_level": "raid5f", 00:16:00.914 "superblock": false, 00:16:00.914 "num_base_bdevs": 3, 00:16:00.914 "num_base_bdevs_discovered": 3, 00:16:00.914 "num_base_bdevs_operational": 3, 00:16:00.914 "base_bdevs_list": [ 00:16:00.914 { 00:16:00.914 "name": "spare", 00:16:00.914 "uuid": "19d382ef-5ffc-5bf1-b954-942cef5626ef", 00:16:00.914 "is_configured": true, 00:16:00.914 "data_offset": 0, 00:16:00.914 "data_size": 65536 00:16:00.914 }, 00:16:00.914 { 00:16:00.914 "name": "BaseBdev2", 00:16:00.914 "uuid": "4f3b57b0-43eb-5646-a758-359c517b12fa", 00:16:00.914 "is_configured": true, 00:16:00.914 "data_offset": 0, 00:16:00.914 "data_size": 65536 00:16:00.914 }, 00:16:00.914 { 00:16:00.914 "name": "BaseBdev3", 00:16:00.914 "uuid": "c4ac7292-07de-5a6f-86a8-948c8a52ac09", 00:16:00.914 "is_configured": true, 00:16:00.914 "data_offset": 0, 00:16:00.914 "data_size": 65536 00:16:00.914 } 00:16:00.914 ] 00:16:00.914 }' 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:00.914 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.232 "name": "raid_bdev1", 00:16:01.232 "uuid": "edb95c45-0811-4d1c-ae8a-36465d4eaea6", 00:16:01.232 "strip_size_kb": 
64, 00:16:01.232 "state": "online", 00:16:01.232 "raid_level": "raid5f", 00:16:01.232 "superblock": false, 00:16:01.232 "num_base_bdevs": 3, 00:16:01.232 "num_base_bdevs_discovered": 3, 00:16:01.232 "num_base_bdevs_operational": 3, 00:16:01.232 "base_bdevs_list": [ 00:16:01.232 { 00:16:01.232 "name": "spare", 00:16:01.232 "uuid": "19d382ef-5ffc-5bf1-b954-942cef5626ef", 00:16:01.232 "is_configured": true, 00:16:01.232 "data_offset": 0, 00:16:01.232 "data_size": 65536 00:16:01.232 }, 00:16:01.232 { 00:16:01.232 "name": "BaseBdev2", 00:16:01.232 "uuid": "4f3b57b0-43eb-5646-a758-359c517b12fa", 00:16:01.232 "is_configured": true, 00:16:01.232 "data_offset": 0, 00:16:01.232 "data_size": 65536 00:16:01.232 }, 00:16:01.232 { 00:16:01.232 "name": "BaseBdev3", 00:16:01.232 "uuid": "c4ac7292-07de-5a6f-86a8-948c8a52ac09", 00:16:01.232 "is_configured": true, 00:16:01.232 "data_offset": 0, 00:16:01.232 "data_size": 65536 00:16:01.232 } 00:16:01.232 ] 00:16:01.232 }' 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.232 18:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.521 [2024-11-26 18:00:43.232664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.521 [2024-11-26 18:00:43.232709] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.521 [2024-11-26 18:00:43.232837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.521 [2024-11-26 18:00:43.232941] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:16:01.521 [2024-11-26 18:00:43.232961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:01.521 18:00:43 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:01.521 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:01.780 /dev/nbd0 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.780 1+0 records in 00:16:01.780 1+0 records out 00:16:01.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325853 s, 12.6 MB/s 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:01.780 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:02.040 /dev/nbd1 00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.040 1+0 records in 00:16:02.040 1+0 records out 00:16:02.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474069 s, 8.6 MB/s 
00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:02.040 18:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:02.338 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:02.338 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:02.338 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:02.338 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:02.338 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:02.338 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.338 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:02.597 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:02.597 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:02.597 18:00:44 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:02.597 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.597 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.597 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:02.597 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:02.597 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.597 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.597 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81969 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81969 ']' 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- 
# kill -0 81969 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81969 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.857 killing process with pid 81969 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81969' 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81969 00:16:02.857 Received shutdown signal, test time was about 60.000000 seconds 00:16:02.857 00:16:02.857 Latency(us) 00:16:02.857 [2024-11-26T18:00:44.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.857 [2024-11-26T18:00:44.720Z] =================================================================================================================== 00:16:02.857 [2024-11-26T18:00:44.720Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:02.857 [2024-11-26 18:00:44.620260] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:02.857 18:00:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81969 00:16:03.426 [2024-11-26 18:00:45.061053] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:04.808 00:16:04.808 real 0m15.776s 00:16:04.808 user 0m19.344s 00:16:04.808 sys 0m2.162s 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:04.808 ************************************ 00:16:04.808 END TEST raid5f_rebuild_test 00:16:04.808 ************************************ 00:16:04.808 18:00:46 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:16:04.808 18:00:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:04.808 18:00:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.808 18:00:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:04.808 ************************************ 00:16:04.808 START TEST raid5f_rebuild_test_sb 00:16:04.808 ************************************ 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev2 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 
-- # raid_pid=82423 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82423 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82423 ']' 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.808 18:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.808 [2024-11-26 18:00:46.509761] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:16:04.808 [2024-11-26 18:00:46.509891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82423 ] 00:16:04.808 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:04.808 Zero copy mechanism will not be used. 
00:16:05.068 [2024-11-26 18:00:46.689001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.068 [2024-11-26 18:00:46.825465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.328 [2024-11-26 18:00:47.057777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.328 [2024-11-26 18:00:47.057837] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.588 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.588 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:05.588 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:05.588 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:05.588 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.588 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.848 BaseBdev1_malloc 00:16:05.848 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.848 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:05.848 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.848 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.848 [2024-11-26 18:00:47.461965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:05.848 [2024-11-26 18:00:47.462057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.848 [2024-11-26 18:00:47.462085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:05.848 
[2024-11-26 18:00:47.462098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.848 [2024-11-26 18:00:47.464587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.848 [2024-11-26 18:00:47.464633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:05.848 BaseBdev1 00:16:05.848 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.848 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:05.848 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:05.848 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.848 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.848 BaseBdev2_malloc 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.849 [2024-11-26 18:00:47.523335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:05.849 [2024-11-26 18:00:47.523422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.849 [2024-11-26 18:00:47.523450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:05.849 [2024-11-26 18:00:47.523463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.849 [2024-11-26 18:00:47.525883] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.849 [2024-11-26 18:00:47.525929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:05.849 BaseBdev2 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.849 BaseBdev3_malloc 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.849 [2024-11-26 18:00:47.595917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:05.849 [2024-11-26 18:00:47.595990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.849 [2024-11-26 18:00:47.596030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:05.849 [2024-11-26 18:00:47.596044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.849 [2024-11-26 18:00:47.598485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.849 [2024-11-26 18:00:47.598530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:16:05.849 BaseBdev3 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.849 spare_malloc 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.849 spare_delay 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.849 [2024-11-26 18:00:47.668530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:05.849 [2024-11-26 18:00:47.668608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.849 [2024-11-26 18:00:47.668632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:05.849 [2024-11-26 18:00:47.668644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.849 [2024-11-26 18:00:47.671187] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.849 [2024-11-26 18:00:47.671234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:05.849 spare 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.849 [2024-11-26 18:00:47.680603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:05.849 [2024-11-26 18:00:47.682708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:05.849 [2024-11-26 18:00:47.682791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:05.849 [2024-11-26 18:00:47.683008] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:05.849 [2024-11-26 18:00:47.683043] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:05.849 [2024-11-26 18:00:47.683372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:05.849 [2024-11-26 18:00:47.690234] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:05.849 [2024-11-26 18:00:47.690268] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:05.849 [2024-11-26 18:00:47.690544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.849 18:00:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.849 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.109 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.109 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.109 "name": "raid_bdev1", 00:16:06.109 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:06.109 "strip_size_kb": 64, 00:16:06.109 "state": "online", 00:16:06.109 "raid_level": "raid5f", 00:16:06.109 "superblock": true, 
00:16:06.109 "num_base_bdevs": 3, 00:16:06.109 "num_base_bdevs_discovered": 3, 00:16:06.109 "num_base_bdevs_operational": 3, 00:16:06.109 "base_bdevs_list": [ 00:16:06.109 { 00:16:06.109 "name": "BaseBdev1", 00:16:06.109 "uuid": "8dd95558-54a2-5084-b9ab-bd01b80cfb22", 00:16:06.109 "is_configured": true, 00:16:06.109 "data_offset": 2048, 00:16:06.109 "data_size": 63488 00:16:06.109 }, 00:16:06.109 { 00:16:06.109 "name": "BaseBdev2", 00:16:06.109 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:06.109 "is_configured": true, 00:16:06.109 "data_offset": 2048, 00:16:06.109 "data_size": 63488 00:16:06.109 }, 00:16:06.109 { 00:16:06.109 "name": "BaseBdev3", 00:16:06.109 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:06.109 "is_configured": true, 00:16:06.109 "data_offset": 2048, 00:16:06.109 "data_size": 63488 00:16:06.109 } 00:16:06.109 ] 00:16:06.109 }' 00:16:06.109 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.109 18:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.368 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:06.368 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:06.368 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.368 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.368 [2024-11-26 18:00:48.173664] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.368 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.368 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:16:06.368 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:06.368 18:00:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.368 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.368 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.629 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.629 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:06.629 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:06.629 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:06.630 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:06.630 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:06.630 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.630 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:06.630 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:06.630 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:06.630 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:06.630 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:06.630 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:06.630 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:06.630 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:06.630 
[2024-11-26 18:00:48.476977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:06.894 /dev/nbd0 00:16:06.894 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:06.894 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:06.894 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:06.894 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:06.894 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:06.894 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:06.894 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:06.894 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:06.894 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:06.894 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:06.894 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.894 1+0 records in 00:16:06.894 1+0 records out 00:16:06.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500744 s, 8.2 MB/s 00:16:06.894 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.894 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:06.894 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.894 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:06.894 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:06.895 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.895 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:06.895 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:06.895 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:06.895 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:06.895 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:16:07.154 496+0 records in 00:16:07.154 496+0 records out 00:16:07.154 65011712 bytes (65 MB, 62 MiB) copied, 0.427538 s, 152 MB/s 00:16:07.154 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:07.154 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.154 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:07.154 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:07.154 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:07.154 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.154 18:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:07.413 [2024-11-26 18:00:49.226175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.413 [2024-11-26 18:00:49.246351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:07.413 18:00:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.413 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.672 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.672 "name": "raid_bdev1", 00:16:07.672 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:07.672 "strip_size_kb": 64, 00:16:07.672 "state": "online", 00:16:07.672 "raid_level": "raid5f", 00:16:07.672 "superblock": true, 00:16:07.672 "num_base_bdevs": 3, 00:16:07.672 "num_base_bdevs_discovered": 2, 00:16:07.672 "num_base_bdevs_operational": 2, 00:16:07.672 "base_bdevs_list": [ 00:16:07.672 { 00:16:07.672 "name": null, 00:16:07.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.672 "is_configured": false, 00:16:07.672 "data_offset": 0, 00:16:07.672 "data_size": 63488 00:16:07.672 }, 00:16:07.672 { 00:16:07.672 "name": "BaseBdev2", 00:16:07.672 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:07.672 "is_configured": true, 00:16:07.672 "data_offset": 2048, 00:16:07.672 "data_size": 63488 00:16:07.672 }, 00:16:07.672 { 00:16:07.672 "name": "BaseBdev3", 00:16:07.672 "uuid": 
"a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:07.672 "is_configured": true, 00:16:07.672 "data_offset": 2048, 00:16:07.672 "data_size": 63488 00:16:07.672 } 00:16:07.672 ] 00:16:07.672 }' 00:16:07.672 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.672 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.930 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.930 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.930 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.930 [2024-11-26 18:00:49.757578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.931 [2024-11-26 18:00:49.776012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:16:07.931 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.931 18:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:07.931 [2024-11-26 18:00:49.784583] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:09.310 18:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.310 18:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.310 18:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.310 18:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.310 18:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.310 18:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:16:09.310 18:00:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.310 18:00:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.310 18:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.310 18:00:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.310 18:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.310 "name": "raid_bdev1", 00:16:09.310 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:09.310 "strip_size_kb": 64, 00:16:09.310 "state": "online", 00:16:09.310 "raid_level": "raid5f", 00:16:09.310 "superblock": true, 00:16:09.310 "num_base_bdevs": 3, 00:16:09.310 "num_base_bdevs_discovered": 3, 00:16:09.310 "num_base_bdevs_operational": 3, 00:16:09.310 "process": { 00:16:09.310 "type": "rebuild", 00:16:09.310 "target": "spare", 00:16:09.310 "progress": { 00:16:09.310 "blocks": 20480, 00:16:09.310 "percent": 16 00:16:09.310 } 00:16:09.310 }, 00:16:09.310 "base_bdevs_list": [ 00:16:09.310 { 00:16:09.310 "name": "spare", 00:16:09.310 "uuid": "d2eaa142-4aac-5196-9df3-d2410d3480d5", 00:16:09.310 "is_configured": true, 00:16:09.310 "data_offset": 2048, 00:16:09.310 "data_size": 63488 00:16:09.310 }, 00:16:09.310 { 00:16:09.310 "name": "BaseBdev2", 00:16:09.310 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:09.310 "is_configured": true, 00:16:09.310 "data_offset": 2048, 00:16:09.310 "data_size": 63488 00:16:09.310 }, 00:16:09.310 { 00:16:09.310 "name": "BaseBdev3", 00:16:09.310 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:09.310 "is_configured": true, 00:16:09.310 "data_offset": 2048, 00:16:09.310 "data_size": 63488 00:16:09.310 } 00:16:09.310 ] 00:16:09.310 }' 00:16:09.310 18:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.310 18:00:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.310 18:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.310 18:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.310 18:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:09.310 18:00:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.310 18:00:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.310 [2024-11-26 18:00:50.940155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.310 [2024-11-26 18:00:50.996329] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:09.310 [2024-11-26 18:00:50.996408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.310 [2024-11-26 18:00:50.996450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.310 [2024-11-26 18:00:50.996459] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:09.310 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.310 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:09.310 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.310 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.310 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.310 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.310 18:00:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:09.310 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.310 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.310 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.310 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.310 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.310 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.310 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.310 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.310 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.310 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.310 "name": "raid_bdev1", 00:16:09.310 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:09.310 "strip_size_kb": 64, 00:16:09.310 "state": "online", 00:16:09.310 "raid_level": "raid5f", 00:16:09.310 "superblock": true, 00:16:09.310 "num_base_bdevs": 3, 00:16:09.310 "num_base_bdevs_discovered": 2, 00:16:09.310 "num_base_bdevs_operational": 2, 00:16:09.310 "base_bdevs_list": [ 00:16:09.310 { 00:16:09.310 "name": null, 00:16:09.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.310 "is_configured": false, 00:16:09.310 "data_offset": 0, 00:16:09.310 "data_size": 63488 00:16:09.310 }, 00:16:09.310 { 00:16:09.310 "name": "BaseBdev2", 00:16:09.310 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:09.310 "is_configured": true, 00:16:09.310 "data_offset": 2048, 00:16:09.310 "data_size": 
63488 00:16:09.310 }, 00:16:09.310 { 00:16:09.310 "name": "BaseBdev3", 00:16:09.310 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:09.310 "is_configured": true, 00:16:09.310 "data_offset": 2048, 00:16:09.311 "data_size": 63488 00:16:09.311 } 00:16:09.311 ] 00:16:09.311 }' 00:16:09.311 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.311 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.878 "name": "raid_bdev1", 00:16:09.878 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:09.878 "strip_size_kb": 64, 00:16:09.878 "state": "online", 00:16:09.878 "raid_level": "raid5f", 00:16:09.878 "superblock": true, 00:16:09.878 "num_base_bdevs": 3, 00:16:09.878 
"num_base_bdevs_discovered": 2, 00:16:09.878 "num_base_bdevs_operational": 2, 00:16:09.878 "base_bdevs_list": [ 00:16:09.878 { 00:16:09.878 "name": null, 00:16:09.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.878 "is_configured": false, 00:16:09.878 "data_offset": 0, 00:16:09.878 "data_size": 63488 00:16:09.878 }, 00:16:09.878 { 00:16:09.878 "name": "BaseBdev2", 00:16:09.878 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:09.878 "is_configured": true, 00:16:09.878 "data_offset": 2048, 00:16:09.878 "data_size": 63488 00:16:09.878 }, 00:16:09.878 { 00:16:09.878 "name": "BaseBdev3", 00:16:09.878 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:09.878 "is_configured": true, 00:16:09.878 "data_offset": 2048, 00:16:09.878 "data_size": 63488 00:16:09.878 } 00:16:09.878 ] 00:16:09.878 }' 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.878 [2024-11-26 18:00:51.632870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.878 [2024-11-26 18:00:51.651847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:16:09.878 18:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.878 18:00:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:09.878 [2024-11-26 18:00:51.660614] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.817 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.817 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.817 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.817 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.817 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.817 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.817 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.817 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.817 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.076 "name": "raid_bdev1", 00:16:11.076 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:11.076 "strip_size_kb": 64, 00:16:11.076 "state": "online", 00:16:11.076 "raid_level": "raid5f", 00:16:11.076 "superblock": true, 00:16:11.076 "num_base_bdevs": 3, 00:16:11.076 "num_base_bdevs_discovered": 3, 00:16:11.076 "num_base_bdevs_operational": 3, 00:16:11.076 "process": { 00:16:11.076 "type": "rebuild", 00:16:11.076 "target": "spare", 00:16:11.076 "progress": { 00:16:11.076 "blocks": 20480, 00:16:11.076 "percent": 16 00:16:11.076 } 
00:16:11.076 }, 00:16:11.076 "base_bdevs_list": [ 00:16:11.076 { 00:16:11.076 "name": "spare", 00:16:11.076 "uuid": "d2eaa142-4aac-5196-9df3-d2410d3480d5", 00:16:11.076 "is_configured": true, 00:16:11.076 "data_offset": 2048, 00:16:11.076 "data_size": 63488 00:16:11.076 }, 00:16:11.076 { 00:16:11.076 "name": "BaseBdev2", 00:16:11.076 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:11.076 "is_configured": true, 00:16:11.076 "data_offset": 2048, 00:16:11.076 "data_size": 63488 00:16:11.076 }, 00:16:11.076 { 00:16:11.076 "name": "BaseBdev3", 00:16:11.076 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:11.076 "is_configured": true, 00:16:11.076 "data_offset": 2048, 00:16:11.076 "data_size": 63488 00:16:11.076 } 00:16:11.076 ] 00:16:11.076 }' 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:11.076 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=593 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.076 18:00:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.076 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.076 "name": "raid_bdev1", 00:16:11.076 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:11.076 "strip_size_kb": 64, 00:16:11.076 "state": "online", 00:16:11.076 "raid_level": "raid5f", 00:16:11.076 "superblock": true, 00:16:11.076 "num_base_bdevs": 3, 00:16:11.076 "num_base_bdevs_discovered": 3, 00:16:11.076 "num_base_bdevs_operational": 3, 00:16:11.076 "process": { 00:16:11.076 "type": "rebuild", 00:16:11.076 "target": "spare", 00:16:11.076 "progress": { 00:16:11.076 "blocks": 22528, 00:16:11.076 "percent": 17 00:16:11.076 } 00:16:11.076 }, 00:16:11.076 "base_bdevs_list": [ 00:16:11.076 { 00:16:11.076 "name": "spare", 00:16:11.076 "uuid": "d2eaa142-4aac-5196-9df3-d2410d3480d5", 00:16:11.076 "is_configured": true, 00:16:11.076 "data_offset": 2048, 00:16:11.076 
"data_size": 63488 00:16:11.076 }, 00:16:11.076 { 00:16:11.076 "name": "BaseBdev2", 00:16:11.076 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:11.076 "is_configured": true, 00:16:11.076 "data_offset": 2048, 00:16:11.076 "data_size": 63488 00:16:11.076 }, 00:16:11.076 { 00:16:11.076 "name": "BaseBdev3", 00:16:11.076 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:11.076 "is_configured": true, 00:16:11.076 "data_offset": 2048, 00:16:11.076 "data_size": 63488 00:16:11.076 } 00:16:11.076 ] 00:16:11.076 }' 00:16:11.077 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.077 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.077 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.336 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.336 18:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.276 18:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.276 18:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.276 18:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.276 18:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.276 18:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.276 18:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.276 18:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.276 18:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:12.276 18:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.276 18:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.276 18:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.276 18:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.276 "name": "raid_bdev1", 00:16:12.276 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:12.276 "strip_size_kb": 64, 00:16:12.276 "state": "online", 00:16:12.276 "raid_level": "raid5f", 00:16:12.276 "superblock": true, 00:16:12.276 "num_base_bdevs": 3, 00:16:12.276 "num_base_bdevs_discovered": 3, 00:16:12.276 "num_base_bdevs_operational": 3, 00:16:12.276 "process": { 00:16:12.276 "type": "rebuild", 00:16:12.276 "target": "spare", 00:16:12.276 "progress": { 00:16:12.276 "blocks": 45056, 00:16:12.276 "percent": 35 00:16:12.276 } 00:16:12.276 }, 00:16:12.276 "base_bdevs_list": [ 00:16:12.276 { 00:16:12.276 "name": "spare", 00:16:12.276 "uuid": "d2eaa142-4aac-5196-9df3-d2410d3480d5", 00:16:12.276 "is_configured": true, 00:16:12.276 "data_offset": 2048, 00:16:12.276 "data_size": 63488 00:16:12.276 }, 00:16:12.276 { 00:16:12.276 "name": "BaseBdev2", 00:16:12.276 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:12.276 "is_configured": true, 00:16:12.276 "data_offset": 2048, 00:16:12.276 "data_size": 63488 00:16:12.276 }, 00:16:12.276 { 00:16:12.276 "name": "BaseBdev3", 00:16:12.276 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:12.276 "is_configured": true, 00:16:12.276 "data_offset": 2048, 00:16:12.276 "data_size": 63488 00:16:12.276 } 00:16:12.276 ] 00:16:12.276 }' 00:16:12.276 18:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.276 18:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.276 
18:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.276 18:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.276 18:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.743 18:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.743 18:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.743 18:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.743 18:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.743 18:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.743 18:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.743 18:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.743 18:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.743 18:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.743 18:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.743 18:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.743 18:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.743 "name": "raid_bdev1", 00:16:13.743 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:13.743 "strip_size_kb": 64, 00:16:13.743 "state": "online", 00:16:13.743 "raid_level": "raid5f", 00:16:13.743 "superblock": true, 00:16:13.743 "num_base_bdevs": 3, 00:16:13.743 "num_base_bdevs_discovered": 3, 00:16:13.743 
"num_base_bdevs_operational": 3, 00:16:13.743 "process": { 00:16:13.743 "type": "rebuild", 00:16:13.743 "target": "spare", 00:16:13.743 "progress": { 00:16:13.743 "blocks": 69632, 00:16:13.743 "percent": 54 00:16:13.743 } 00:16:13.743 }, 00:16:13.743 "base_bdevs_list": [ 00:16:13.743 { 00:16:13.743 "name": "spare", 00:16:13.744 "uuid": "d2eaa142-4aac-5196-9df3-d2410d3480d5", 00:16:13.744 "is_configured": true, 00:16:13.744 "data_offset": 2048, 00:16:13.744 "data_size": 63488 00:16:13.744 }, 00:16:13.744 { 00:16:13.744 "name": "BaseBdev2", 00:16:13.744 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:13.744 "is_configured": true, 00:16:13.744 "data_offset": 2048, 00:16:13.744 "data_size": 63488 00:16:13.744 }, 00:16:13.744 { 00:16:13.744 "name": "BaseBdev3", 00:16:13.744 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:13.744 "is_configured": true, 00:16:13.744 "data_offset": 2048, 00:16:13.744 "data_size": 63488 00:16:13.744 } 00:16:13.744 ] 00:16:13.744 }' 00:16:13.744 18:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.744 18:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.744 18:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.744 18:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.744 18:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:14.681 18:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.681 18:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.681 18:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.681 18:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:14.681 18:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.681 18:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.681 18:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.681 18:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.681 18:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.681 18:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.681 18:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.681 18:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.681 "name": "raid_bdev1", 00:16:14.681 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:14.681 "strip_size_kb": 64, 00:16:14.681 "state": "online", 00:16:14.681 "raid_level": "raid5f", 00:16:14.681 "superblock": true, 00:16:14.681 "num_base_bdevs": 3, 00:16:14.681 "num_base_bdevs_discovered": 3, 00:16:14.681 "num_base_bdevs_operational": 3, 00:16:14.681 "process": { 00:16:14.682 "type": "rebuild", 00:16:14.682 "target": "spare", 00:16:14.682 "progress": { 00:16:14.682 "blocks": 92160, 00:16:14.682 "percent": 72 00:16:14.682 } 00:16:14.682 }, 00:16:14.682 "base_bdevs_list": [ 00:16:14.682 { 00:16:14.682 "name": "spare", 00:16:14.682 "uuid": "d2eaa142-4aac-5196-9df3-d2410d3480d5", 00:16:14.682 "is_configured": true, 00:16:14.682 "data_offset": 2048, 00:16:14.682 "data_size": 63488 00:16:14.682 }, 00:16:14.682 { 00:16:14.682 "name": "BaseBdev2", 00:16:14.682 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:14.682 "is_configured": true, 00:16:14.682 "data_offset": 2048, 00:16:14.682 "data_size": 63488 00:16:14.682 }, 00:16:14.682 { 00:16:14.682 "name": "BaseBdev3", 
00:16:14.682 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:14.682 "is_configured": true, 00:16:14.682 "data_offset": 2048, 00:16:14.682 "data_size": 63488 00:16:14.682 } 00:16:14.682 ] 00:16:14.682 }' 00:16:14.682 18:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.682 18:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.682 18:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.682 18:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.682 18:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.620 18:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.620 18:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.620 18:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.620 18:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.620 18:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.620 18:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.620 18:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.620 18:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.620 18:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.620 18:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.620 18:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:15.620 18:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.620 "name": "raid_bdev1", 00:16:15.620 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:15.620 "strip_size_kb": 64, 00:16:15.620 "state": "online", 00:16:15.620 "raid_level": "raid5f", 00:16:15.620 "superblock": true, 00:16:15.620 "num_base_bdevs": 3, 00:16:15.620 "num_base_bdevs_discovered": 3, 00:16:15.620 "num_base_bdevs_operational": 3, 00:16:15.620 "process": { 00:16:15.620 "type": "rebuild", 00:16:15.620 "target": "spare", 00:16:15.620 "progress": { 00:16:15.620 "blocks": 116736, 00:16:15.620 "percent": 91 00:16:15.620 } 00:16:15.620 }, 00:16:15.620 "base_bdevs_list": [ 00:16:15.620 { 00:16:15.620 "name": "spare", 00:16:15.620 "uuid": "d2eaa142-4aac-5196-9df3-d2410d3480d5", 00:16:15.620 "is_configured": true, 00:16:15.620 "data_offset": 2048, 00:16:15.620 "data_size": 63488 00:16:15.620 }, 00:16:15.620 { 00:16:15.620 "name": "BaseBdev2", 00:16:15.620 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:15.620 "is_configured": true, 00:16:15.620 "data_offset": 2048, 00:16:15.620 "data_size": 63488 00:16:15.620 }, 00:16:15.620 { 00:16:15.620 "name": "BaseBdev3", 00:16:15.620 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:15.620 "is_configured": true, 00:16:15.620 "data_offset": 2048, 00:16:15.620 "data_size": 63488 00:16:15.620 } 00:16:15.620 ] 00:16:15.620 }' 00:16:15.621 18:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.879 18:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.879 18:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.879 18:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.879 18:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:16.138 [2024-11-26 
18:00:57.922157] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:16.138 [2024-11-26 18:00:57.922286] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:16.138 [2024-11-26 18:00:57.922466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.707 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.707 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.707 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.707 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.707 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.707 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.707 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.707 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.707 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.707 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.707 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.967 "name": "raid_bdev1", 00:16:16.967 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:16.967 "strip_size_kb": 64, 00:16:16.967 "state": "online", 00:16:16.967 "raid_level": "raid5f", 00:16:16.967 "superblock": true, 00:16:16.967 "num_base_bdevs": 3, 00:16:16.967 
"num_base_bdevs_discovered": 3, 00:16:16.967 "num_base_bdevs_operational": 3, 00:16:16.967 "base_bdevs_list": [ 00:16:16.967 { 00:16:16.967 "name": "spare", 00:16:16.967 "uuid": "d2eaa142-4aac-5196-9df3-d2410d3480d5", 00:16:16.967 "is_configured": true, 00:16:16.967 "data_offset": 2048, 00:16:16.967 "data_size": 63488 00:16:16.967 }, 00:16:16.967 { 00:16:16.967 "name": "BaseBdev2", 00:16:16.967 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:16.967 "is_configured": true, 00:16:16.967 "data_offset": 2048, 00:16:16.967 "data_size": 63488 00:16:16.967 }, 00:16:16.967 { 00:16:16.967 "name": "BaseBdev3", 00:16:16.967 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:16.967 "is_configured": true, 00:16:16.967 "data_offset": 2048, 00:16:16.967 "data_size": 63488 00:16:16.967 } 00:16:16.967 ] 00:16:16.967 }' 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.967 "name": "raid_bdev1", 00:16:16.967 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:16.967 "strip_size_kb": 64, 00:16:16.967 "state": "online", 00:16:16.967 "raid_level": "raid5f", 00:16:16.967 "superblock": true, 00:16:16.967 "num_base_bdevs": 3, 00:16:16.967 "num_base_bdevs_discovered": 3, 00:16:16.967 "num_base_bdevs_operational": 3, 00:16:16.967 "base_bdevs_list": [ 00:16:16.967 { 00:16:16.967 "name": "spare", 00:16:16.967 "uuid": "d2eaa142-4aac-5196-9df3-d2410d3480d5", 00:16:16.967 "is_configured": true, 00:16:16.967 "data_offset": 2048, 00:16:16.967 "data_size": 63488 00:16:16.967 }, 00:16:16.967 { 00:16:16.967 "name": "BaseBdev2", 00:16:16.967 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:16.967 "is_configured": true, 00:16:16.967 "data_offset": 2048, 00:16:16.967 "data_size": 63488 00:16:16.967 }, 00:16:16.967 { 00:16:16.967 "name": "BaseBdev3", 00:16:16.967 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:16.967 "is_configured": true, 00:16:16.967 "data_offset": 2048, 00:16:16.967 "data_size": 63488 00:16:16.967 } 00:16:16.967 ] 00:16:16.967 }' 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.967 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:16.968 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:16.968 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.968 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.968 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.968 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.968 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.968 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.968 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.968 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.968 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.968 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.968 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.968 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.968 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.228 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.228 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.228 "name": "raid_bdev1", 
00:16:17.228 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:17.228 "strip_size_kb": 64, 00:16:17.228 "state": "online", 00:16:17.228 "raid_level": "raid5f", 00:16:17.228 "superblock": true, 00:16:17.228 "num_base_bdevs": 3, 00:16:17.228 "num_base_bdevs_discovered": 3, 00:16:17.228 "num_base_bdevs_operational": 3, 00:16:17.228 "base_bdevs_list": [ 00:16:17.228 { 00:16:17.228 "name": "spare", 00:16:17.228 "uuid": "d2eaa142-4aac-5196-9df3-d2410d3480d5", 00:16:17.228 "is_configured": true, 00:16:17.228 "data_offset": 2048, 00:16:17.228 "data_size": 63488 00:16:17.228 }, 00:16:17.228 { 00:16:17.228 "name": "BaseBdev2", 00:16:17.228 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:17.228 "is_configured": true, 00:16:17.228 "data_offset": 2048, 00:16:17.228 "data_size": 63488 00:16:17.228 }, 00:16:17.228 { 00:16:17.228 "name": "BaseBdev3", 00:16:17.228 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:17.228 "is_configured": true, 00:16:17.228 "data_offset": 2048, 00:16:17.228 "data_size": 63488 00:16:17.228 } 00:16:17.228 ] 00:16:17.228 }' 00:16:17.228 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.228 18:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.488 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:17.488 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.488 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.488 [2024-11-26 18:00:59.227289] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:17.488 [2024-11-26 18:00:59.227331] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:17.488 [2024-11-26 18:00:59.227444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.488 [2024-11-26 18:00:59.227541] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:17.488 [2024-11-26 18:00:59.227568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:17.488 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.488 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:17.488 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.488 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.488 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.488 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.488 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:17.488 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:17.488 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:17.488 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:17.488 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:17.489 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:17.489 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:17.489 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:17.489 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:17.489 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@12 -- # local i 00:16:17.489 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:17.489 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:17.489 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:17.748 /dev/nbd0 00:16:17.748 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:17.748 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:17.748 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:17.748 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:17.748 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:17.748 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:17.748 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:17.748 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:17.748 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:17.748 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:17.748 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:17.748 1+0 records in 00:16:17.748 1+0 records out 00:16:17.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236574 s, 17.3 MB/s 00:16:17.748 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.748 18:00:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:17.748 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.748 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:17.748 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:17.748 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:17.748 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:17.748 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:18.007 /dev/nbd1 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:18.007 1+0 records in 00:16:18.007 1+0 records out 00:16:18.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465781 s, 8.8 MB/s 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:18.007 18:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:18.267 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:18.267 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:18.267 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:18.267 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:18.267 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:18.267 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.267 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:16:18.528 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:18.528 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:18.528 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:18.528 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.528 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.528 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:18.528 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:18.528 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.528 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.528 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.787 [2024-11-26 18:01:00.528448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:18.787 [2024-11-26 18:01:00.528527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.787 [2024-11-26 18:01:00.528553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:18.787 [2024-11-26 18:01:00.528568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.787 [2024-11-26 18:01:00.531467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.787 [2024-11-26 18:01:00.531516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:18.787 [2024-11-26 18:01:00.531647] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:18.787 [2024-11-26 18:01:00.531730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:18.787 [2024-11-26 18:01:00.531950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:18.787 [2024-11-26 18:01:00.532133] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:18.787 spare 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.787 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.787 [2024-11-26 18:01:00.632091] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:18.787 [2024-11-26 18:01:00.632168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:18.787 [2024-11-26 18:01:00.632619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:18.787 [2024-11-26 18:01:00.639854] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:18.787 [2024-11-26 18:01:00.639887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:18.787 [2024-11-26 18:01:00.640195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.047 "name": "raid_bdev1", 00:16:19.047 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:19.047 "strip_size_kb": 64, 00:16:19.047 "state": "online", 00:16:19.047 "raid_level": "raid5f", 00:16:19.047 "superblock": true, 00:16:19.047 "num_base_bdevs": 3, 00:16:19.047 "num_base_bdevs_discovered": 3, 00:16:19.047 "num_base_bdevs_operational": 3, 00:16:19.047 "base_bdevs_list": [ 00:16:19.047 { 00:16:19.047 "name": "spare", 00:16:19.047 "uuid": "d2eaa142-4aac-5196-9df3-d2410d3480d5", 00:16:19.047 "is_configured": true, 00:16:19.047 "data_offset": 2048, 00:16:19.047 "data_size": 63488 00:16:19.047 }, 00:16:19.047 { 00:16:19.047 "name": "BaseBdev2", 00:16:19.047 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:19.047 "is_configured": true, 00:16:19.047 "data_offset": 
2048, 00:16:19.047 "data_size": 63488 00:16:19.047 }, 00:16:19.047 { 00:16:19.047 "name": "BaseBdev3", 00:16:19.047 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:19.047 "is_configured": true, 00:16:19.047 "data_offset": 2048, 00:16:19.047 "data_size": 63488 00:16:19.047 } 00:16:19.047 ] 00:16:19.047 }' 00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.047 18:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.307 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:19.307 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.307 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:19.307 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:19.307 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.307 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.307 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.307 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.307 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.307 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.307 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.307 "name": "raid_bdev1", 00:16:19.307 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:19.307 "strip_size_kb": 64, 00:16:19.307 "state": "online", 00:16:19.307 "raid_level": "raid5f", 00:16:19.307 "superblock": true, 00:16:19.307 
"num_base_bdevs": 3, 00:16:19.307 "num_base_bdevs_discovered": 3, 00:16:19.307 "num_base_bdevs_operational": 3, 00:16:19.307 "base_bdevs_list": [ 00:16:19.307 { 00:16:19.307 "name": "spare", 00:16:19.307 "uuid": "d2eaa142-4aac-5196-9df3-d2410d3480d5", 00:16:19.307 "is_configured": true, 00:16:19.307 "data_offset": 2048, 00:16:19.307 "data_size": 63488 00:16:19.307 }, 00:16:19.307 { 00:16:19.307 "name": "BaseBdev2", 00:16:19.307 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:19.307 "is_configured": true, 00:16:19.307 "data_offset": 2048, 00:16:19.307 "data_size": 63488 00:16:19.307 }, 00:16:19.307 { 00:16:19.307 "name": "BaseBdev3", 00:16:19.307 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:19.307 "is_configured": true, 00:16:19.307 "data_offset": 2048, 00:16:19.307 "data_size": 63488 00:16:19.307 } 00:16:19.307 ] 00:16:19.307 }' 00:16:19.307 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.566 18:01:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.566 [2024-11-26 18:01:01.323466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.566 "name": "raid_bdev1", 00:16:19.566 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:19.566 "strip_size_kb": 64, 00:16:19.566 "state": "online", 00:16:19.566 "raid_level": "raid5f", 00:16:19.566 "superblock": true, 00:16:19.566 "num_base_bdevs": 3, 00:16:19.566 "num_base_bdevs_discovered": 2, 00:16:19.566 "num_base_bdevs_operational": 2, 00:16:19.566 "base_bdevs_list": [ 00:16:19.566 { 00:16:19.566 "name": null, 00:16:19.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.566 "is_configured": false, 00:16:19.566 "data_offset": 0, 00:16:19.566 "data_size": 63488 00:16:19.566 }, 00:16:19.566 { 00:16:19.566 "name": "BaseBdev2", 00:16:19.566 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:19.566 "is_configured": true, 00:16:19.566 "data_offset": 2048, 00:16:19.566 "data_size": 63488 00:16:19.566 }, 00:16:19.566 { 00:16:19.566 "name": "BaseBdev3", 00:16:19.566 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:19.566 "is_configured": true, 00:16:19.566 "data_offset": 2048, 00:16:19.566 "data_size": 63488 00:16:19.566 } 00:16:19.566 ] 00:16:19.566 }' 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.566 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.133 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:20.133 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.133 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.133 [2024-11-26 18:01:01.815094] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:20.133 [2024-11-26 18:01:01.815368] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:20.133 [2024-11-26 18:01:01.815401] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:20.133 [2024-11-26 18:01:01.815451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:20.134 [2024-11-26 18:01:01.835221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:20.134 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.134 18:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:20.134 [2024-11-26 18:01:01.845114] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:21.080 18:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.080 18:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.080 18:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.080 18:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.080 18:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.080 18:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.080 18:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.080 18:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.080 18:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:21.080 18:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.080 18:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.080 "name": "raid_bdev1", 00:16:21.080 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:21.080 "strip_size_kb": 64, 00:16:21.080 "state": "online", 00:16:21.080 "raid_level": "raid5f", 00:16:21.080 "superblock": true, 00:16:21.080 "num_base_bdevs": 3, 00:16:21.080 "num_base_bdevs_discovered": 3, 00:16:21.080 "num_base_bdevs_operational": 3, 00:16:21.080 "process": { 00:16:21.081 "type": "rebuild", 00:16:21.081 "target": "spare", 00:16:21.081 "progress": { 00:16:21.081 "blocks": 18432, 00:16:21.081 "percent": 14 00:16:21.081 } 00:16:21.081 }, 00:16:21.081 "base_bdevs_list": [ 00:16:21.081 { 00:16:21.081 "name": "spare", 00:16:21.081 "uuid": "d2eaa142-4aac-5196-9df3-d2410d3480d5", 00:16:21.081 "is_configured": true, 00:16:21.081 "data_offset": 2048, 00:16:21.081 "data_size": 63488 00:16:21.081 }, 00:16:21.081 { 00:16:21.081 "name": "BaseBdev2", 00:16:21.081 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:21.081 "is_configured": true, 00:16:21.081 "data_offset": 2048, 00:16:21.081 "data_size": 63488 00:16:21.081 }, 00:16:21.081 { 00:16:21.081 "name": "BaseBdev3", 00:16:21.081 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:21.081 "is_configured": true, 00:16:21.081 "data_offset": 2048, 00:16:21.081 "data_size": 63488 00:16:21.081 } 00:16:21.081 ] 00:16:21.081 }' 00:16:21.081 18:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.081 18:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.081 18:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.341 18:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:16:21.341 18:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:21.341 18:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.341 18:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.341 [2024-11-26 18:01:02.989797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:21.341 [2024-11-26 18:01:03.058679] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:21.341 [2024-11-26 18:01:03.058803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.341 [2024-11-26 18:01:03.058827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:21.341 [2024-11-26 18:01:03.058840] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:21.341 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.341 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:21.341 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.341 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.341 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.341 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.341 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:21.341 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.341 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.341 18:01:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.341 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.341 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.341 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.341 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.341 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.341 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.342 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.342 "name": "raid_bdev1", 00:16:21.342 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:21.342 "strip_size_kb": 64, 00:16:21.342 "state": "online", 00:16:21.342 "raid_level": "raid5f", 00:16:21.342 "superblock": true, 00:16:21.342 "num_base_bdevs": 3, 00:16:21.342 "num_base_bdevs_discovered": 2, 00:16:21.342 "num_base_bdevs_operational": 2, 00:16:21.342 "base_bdevs_list": [ 00:16:21.342 { 00:16:21.342 "name": null, 00:16:21.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.342 "is_configured": false, 00:16:21.342 "data_offset": 0, 00:16:21.342 "data_size": 63488 00:16:21.342 }, 00:16:21.342 { 00:16:21.342 "name": "BaseBdev2", 00:16:21.342 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:21.342 "is_configured": true, 00:16:21.342 "data_offset": 2048, 00:16:21.342 "data_size": 63488 00:16:21.342 }, 00:16:21.342 { 00:16:21.342 "name": "BaseBdev3", 00:16:21.342 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:21.342 "is_configured": true, 00:16:21.342 "data_offset": 2048, 00:16:21.342 "data_size": 63488 00:16:21.342 } 00:16:21.342 ] 00:16:21.342 }' 00:16:21.342 18:01:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.342 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.909 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:21.909 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.909 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.909 [2024-11-26 18:01:03.549639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:21.909 [2024-11-26 18:01:03.549733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.909 [2024-11-26 18:01:03.549762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:21.909 [2024-11-26 18:01:03.549782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.909 [2024-11-26 18:01:03.550474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.909 [2024-11-26 18:01:03.550523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:21.909 [2024-11-26 18:01:03.550668] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:21.909 [2024-11-26 18:01:03.550699] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:21.909 [2024-11-26 18:01:03.550712] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:21.909 [2024-11-26 18:01:03.550756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:21.909 [2024-11-26 18:01:03.571341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:21.909 spare 00:16:21.909 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.909 18:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:21.909 [2024-11-26 18:01:03.581739] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:22.844 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.844 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.844 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.844 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.844 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.844 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.844 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.844 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.844 "name": "raid_bdev1", 00:16:22.844 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:22.844 "strip_size_kb": 64, 00:16:22.844 "state": 
"online", 00:16:22.844 "raid_level": "raid5f", 00:16:22.844 "superblock": true, 00:16:22.844 "num_base_bdevs": 3, 00:16:22.844 "num_base_bdevs_discovered": 3, 00:16:22.844 "num_base_bdevs_operational": 3, 00:16:22.844 "process": { 00:16:22.844 "type": "rebuild", 00:16:22.844 "target": "spare", 00:16:22.844 "progress": { 00:16:22.844 "blocks": 18432, 00:16:22.844 "percent": 14 00:16:22.844 } 00:16:22.844 }, 00:16:22.844 "base_bdevs_list": [ 00:16:22.844 { 00:16:22.844 "name": "spare", 00:16:22.844 "uuid": "d2eaa142-4aac-5196-9df3-d2410d3480d5", 00:16:22.844 "is_configured": true, 00:16:22.844 "data_offset": 2048, 00:16:22.844 "data_size": 63488 00:16:22.844 }, 00:16:22.844 { 00:16:22.844 "name": "BaseBdev2", 00:16:22.844 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:22.844 "is_configured": true, 00:16:22.844 "data_offset": 2048, 00:16:22.844 "data_size": 63488 00:16:22.844 }, 00:16:22.844 { 00:16:22.844 "name": "BaseBdev3", 00:16:22.844 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:22.844 "is_configured": true, 00:16:22.844 "data_offset": 2048, 00:16:22.844 "data_size": 63488 00:16:22.844 } 00:16:22.844 ] 00:16:22.844 }' 00:16:22.844 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.844 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.844 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.103 [2024-11-26 18:01:04.726808] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.103 [2024-11-26 18:01:04.794934] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:23.103 [2024-11-26 18:01:04.795011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.103 [2024-11-26 18:01:04.795045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.103 [2024-11-26 18:01:04.795055] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.103 "name": "raid_bdev1", 00:16:23.103 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:23.103 "strip_size_kb": 64, 00:16:23.103 "state": "online", 00:16:23.103 "raid_level": "raid5f", 00:16:23.103 "superblock": true, 00:16:23.103 "num_base_bdevs": 3, 00:16:23.103 "num_base_bdevs_discovered": 2, 00:16:23.103 "num_base_bdevs_operational": 2, 00:16:23.103 "base_bdevs_list": [ 00:16:23.103 { 00:16:23.103 "name": null, 00:16:23.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.103 "is_configured": false, 00:16:23.103 "data_offset": 0, 00:16:23.103 "data_size": 63488 00:16:23.103 }, 00:16:23.103 { 00:16:23.103 "name": "BaseBdev2", 00:16:23.103 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:23.103 "is_configured": true, 00:16:23.103 "data_offset": 2048, 00:16:23.103 "data_size": 63488 00:16:23.103 }, 00:16:23.103 { 00:16:23.103 "name": "BaseBdev3", 00:16:23.103 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:23.103 "is_configured": true, 00:16:23.103 "data_offset": 2048, 00:16:23.103 "data_size": 63488 00:16:23.103 } 00:16:23.103 ] 00:16:23.103 }' 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.103 18:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.672 "name": "raid_bdev1", 00:16:23.672 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:23.672 "strip_size_kb": 64, 00:16:23.672 "state": "online", 00:16:23.672 "raid_level": "raid5f", 00:16:23.672 "superblock": true, 00:16:23.672 "num_base_bdevs": 3, 00:16:23.672 "num_base_bdevs_discovered": 2, 00:16:23.672 "num_base_bdevs_operational": 2, 00:16:23.672 "base_bdevs_list": [ 00:16:23.672 { 00:16:23.672 "name": null, 00:16:23.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.672 "is_configured": false, 00:16:23.672 "data_offset": 0, 00:16:23.672 "data_size": 63488 00:16:23.672 }, 00:16:23.672 { 00:16:23.672 "name": "BaseBdev2", 00:16:23.672 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:23.672 "is_configured": true, 00:16:23.672 "data_offset": 2048, 00:16:23.672 "data_size": 63488 00:16:23.672 }, 00:16:23.672 { 00:16:23.672 "name": "BaseBdev3", 00:16:23.672 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:23.672 "is_configured": true, 
00:16:23.672 "data_offset": 2048, 00:16:23.672 "data_size": 63488 00:16:23.672 } 00:16:23.672 ] 00:16:23.672 }' 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.672 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.672 [2024-11-26 18:01:05.497079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:23.673 [2024-11-26 18:01:05.497152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.673 [2024-11-26 18:01:05.497185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:23.673 [2024-11-26 18:01:05.497201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.673 [2024-11-26 18:01:05.497846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.673 [2024-11-26 
18:01:05.497882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:23.673 [2024-11-26 18:01:05.498015] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:23.673 [2024-11-26 18:01:05.498059] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:23.673 [2024-11-26 18:01:05.498102] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:23.673 [2024-11-26 18:01:05.498123] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:23.673 BaseBdev1 00:16:23.673 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.673 18:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:25.050 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:25.050 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.050 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.050 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.050 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.050 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:25.050 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.050 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.050 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.050 18:01:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.050 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.050 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.050 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.050 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.050 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.050 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.050 "name": "raid_bdev1", 00:16:25.050 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:25.050 "strip_size_kb": 64, 00:16:25.050 "state": "online", 00:16:25.050 "raid_level": "raid5f", 00:16:25.050 "superblock": true, 00:16:25.050 "num_base_bdevs": 3, 00:16:25.050 "num_base_bdevs_discovered": 2, 00:16:25.050 "num_base_bdevs_operational": 2, 00:16:25.050 "base_bdevs_list": [ 00:16:25.050 { 00:16:25.050 "name": null, 00:16:25.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.051 "is_configured": false, 00:16:25.051 "data_offset": 0, 00:16:25.051 "data_size": 63488 00:16:25.051 }, 00:16:25.051 { 00:16:25.051 "name": "BaseBdev2", 00:16:25.051 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:25.051 "is_configured": true, 00:16:25.051 "data_offset": 2048, 00:16:25.051 "data_size": 63488 00:16:25.051 }, 00:16:25.051 { 00:16:25.051 "name": "BaseBdev3", 00:16:25.051 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:25.051 "is_configured": true, 00:16:25.051 "data_offset": 2048, 00:16:25.051 "data_size": 63488 00:16:25.051 } 00:16:25.051 ] 00:16:25.051 }' 00:16:25.051 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.051 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:25.309 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.309 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.310 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.310 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.310 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.310 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.310 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.310 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.310 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.310 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.310 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.310 "name": "raid_bdev1", 00:16:25.310 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:25.310 "strip_size_kb": 64, 00:16:25.310 "state": "online", 00:16:25.310 "raid_level": "raid5f", 00:16:25.310 "superblock": true, 00:16:25.310 "num_base_bdevs": 3, 00:16:25.310 "num_base_bdevs_discovered": 2, 00:16:25.310 "num_base_bdevs_operational": 2, 00:16:25.310 "base_bdevs_list": [ 00:16:25.310 { 00:16:25.310 "name": null, 00:16:25.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.310 "is_configured": false, 00:16:25.310 "data_offset": 0, 00:16:25.310 "data_size": 63488 00:16:25.310 }, 00:16:25.310 { 00:16:25.310 "name": "BaseBdev2", 00:16:25.310 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 
00:16:25.310 "is_configured": true, 00:16:25.310 "data_offset": 2048, 00:16:25.310 "data_size": 63488 00:16:25.310 }, 00:16:25.310 { 00:16:25.310 "name": "BaseBdev3", 00:16:25.310 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:25.310 "is_configured": true, 00:16:25.310 "data_offset": 2048, 00:16:25.310 "data_size": 63488 00:16:25.310 } 00:16:25.310 ] 00:16:25.310 }' 00:16:25.310 18:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.310 18:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:25.310 18:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.310 18:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:25.310 18:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:25.310 18:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:25.310 18:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:25.310 18:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:25.310 18:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.310 18:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:25.310 18:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.310 18:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:25.310 18:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.310 18:01:07 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.310 [2024-11-26 18:01:07.058514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.310 [2024-11-26 18:01:07.058705] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:25.310 [2024-11-26 18:01:07.058730] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:25.310 request: 00:16:25.310 { 00:16:25.310 "base_bdev": "BaseBdev1", 00:16:25.310 "raid_bdev": "raid_bdev1", 00:16:25.310 "method": "bdev_raid_add_base_bdev", 00:16:25.310 "req_id": 1 00:16:25.310 } 00:16:25.310 Got JSON-RPC error response 00:16:25.310 response: 00:16:25.310 { 00:16:25.310 "code": -22, 00:16:25.310 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:25.310 } 00:16:25.310 18:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:25.310 18:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:25.310 18:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:25.310 18:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:25.310 18:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:25.310 18:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:26.246 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:26.246 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.246 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.246 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.246 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.246 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.246 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.246 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.246 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.246 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.246 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.246 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.246 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.246 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.246 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.505 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.505 "name": "raid_bdev1", 00:16:26.505 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:26.505 "strip_size_kb": 64, 00:16:26.505 "state": "online", 00:16:26.505 "raid_level": "raid5f", 00:16:26.505 "superblock": true, 00:16:26.505 "num_base_bdevs": 3, 00:16:26.505 "num_base_bdevs_discovered": 2, 00:16:26.505 "num_base_bdevs_operational": 2, 00:16:26.505 "base_bdevs_list": [ 00:16:26.505 { 00:16:26.505 "name": null, 00:16:26.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.505 "is_configured": false, 00:16:26.505 "data_offset": 0, 00:16:26.505 "data_size": 63488 00:16:26.505 }, 00:16:26.505 { 00:16:26.505 
"name": "BaseBdev2", 00:16:26.505 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:26.505 "is_configured": true, 00:16:26.505 "data_offset": 2048, 00:16:26.505 "data_size": 63488 00:16:26.505 }, 00:16:26.505 { 00:16:26.505 "name": "BaseBdev3", 00:16:26.505 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:26.505 "is_configured": true, 00:16:26.505 "data_offset": 2048, 00:16:26.505 "data_size": 63488 00:16:26.505 } 00:16:26.505 ] 00:16:26.505 }' 00:16:26.505 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.505 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.763 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:26.764 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.764 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:26.764 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:26.764 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.764 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.764 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.764 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.764 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.764 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.764 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.764 "name": "raid_bdev1", 00:16:26.764 "uuid": "765b537b-c4c8-43ee-b5a9-5c956e0f7ca0", 00:16:26.764 
"strip_size_kb": 64, 00:16:26.764 "state": "online", 00:16:26.764 "raid_level": "raid5f", 00:16:26.764 "superblock": true, 00:16:26.764 "num_base_bdevs": 3, 00:16:26.764 "num_base_bdevs_discovered": 2, 00:16:26.764 "num_base_bdevs_operational": 2, 00:16:26.764 "base_bdevs_list": [ 00:16:26.764 { 00:16:26.764 "name": null, 00:16:26.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.764 "is_configured": false, 00:16:26.764 "data_offset": 0, 00:16:26.764 "data_size": 63488 00:16:26.764 }, 00:16:26.764 { 00:16:26.764 "name": "BaseBdev2", 00:16:26.764 "uuid": "4329ded3-828f-5af5-b63c-9a32f42c398a", 00:16:26.764 "is_configured": true, 00:16:26.764 "data_offset": 2048, 00:16:26.764 "data_size": 63488 00:16:26.764 }, 00:16:26.764 { 00:16:26.764 "name": "BaseBdev3", 00:16:26.764 "uuid": "a4f34304-fb98-5bbc-8460-1ce0f717eed1", 00:16:26.764 "is_configured": true, 00:16:26.764 "data_offset": 2048, 00:16:26.764 "data_size": 63488 00:16:26.764 } 00:16:26.764 ] 00:16:26.764 }' 00:16:26.764 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.023 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:27.023 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.023 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:27.023 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82423 00:16:27.023 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82423 ']' 00:16:27.023 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82423 00:16:27.023 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:27.023 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.023 18:01:08 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82423 00:16:27.023 killing process with pid 82423 00:16:27.023 Received shutdown signal, test time was about 60.000000 seconds 00:16:27.023 00:16:27.023 Latency(us) 00:16:27.023 [2024-11-26T18:01:08.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.023 [2024-11-26T18:01:08.886Z] =================================================================================================================== 00:16:27.023 [2024-11-26T18:01:08.886Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:27.023 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.023 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.023 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82423' 00:16:27.023 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82423 00:16:27.023 18:01:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82423 00:16:27.023 [2024-11-26 18:01:08.744616] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.023 [2024-11-26 18:01:08.744770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.023 [2024-11-26 18:01:08.744873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.023 [2024-11-26 18:01:08.744904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:27.588 [2024-11-26 18:01:09.185928] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:28.522 18:01:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:28.522 00:16:28.522 real 0m23.973s 00:16:28.522 user 0m30.909s 
00:16:28.522 sys 0m2.815s 00:16:28.522 18:01:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.522 18:01:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.522 ************************************ 00:16:28.522 END TEST raid5f_rebuild_test_sb 00:16:28.522 ************************************ 00:16:28.861 18:01:10 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:28.861 18:01:10 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:28.861 18:01:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:28.861 18:01:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.861 18:01:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:28.861 ************************************ 00:16:28.861 START TEST raid5f_state_function_test 00:16:28.861 ************************************ 00:16:28.861 18:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:16:28.861 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:28.861 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:28.861 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:28.861 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:28.861 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:28.861 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.861 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:28.861 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:28.861 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.861 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:28.861 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:28.861 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83177 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:28.862 Process raid pid: 83177 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83177' 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83177 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83177 ']' 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.862 18:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.862 [2024-11-26 18:01:10.547487] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:16:28.862 [2024-11-26 18:01:10.547619] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.120 [2024-11-26 18:01:10.725289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.120 [2024-11-26 18:01:10.854113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.379 [2024-11-26 18:01:11.067588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.379 [2024-11-26 18:01:11.067638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.640 [2024-11-26 18:01:11.424439] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.640 [2024-11-26 18:01:11.424493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.640 [2024-11-26 18:01:11.424504] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.640 [2024-11-26 18:01:11.424513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.640 [2024-11-26 18:01:11.424519] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:29.640 [2024-11-26 18:01:11.424528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:29.640 [2024-11-26 18:01:11.424534] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:29.640 [2024-11-26 18:01:11.424543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.640 18:01:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.640 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.640 "name": "Existed_Raid", 00:16:29.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.640 "strip_size_kb": 64, 00:16:29.640 "state": "configuring", 00:16:29.640 "raid_level": "raid5f", 00:16:29.640 "superblock": false, 00:16:29.640 "num_base_bdevs": 4, 00:16:29.640 "num_base_bdevs_discovered": 0, 00:16:29.640 "num_base_bdevs_operational": 4, 00:16:29.640 "base_bdevs_list": [ 00:16:29.640 { 00:16:29.640 "name": "BaseBdev1", 00:16:29.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.640 "is_configured": false, 00:16:29.640 "data_offset": 0, 00:16:29.640 "data_size": 0 00:16:29.640 }, 00:16:29.640 { 00:16:29.640 "name": "BaseBdev2", 00:16:29.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.640 "is_configured": false, 00:16:29.640 "data_offset": 0, 00:16:29.640 "data_size": 0 00:16:29.640 }, 00:16:29.640 { 00:16:29.640 "name": "BaseBdev3", 00:16:29.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.640 "is_configured": false, 00:16:29.640 "data_offset": 0, 00:16:29.640 "data_size": 0 00:16:29.640 }, 00:16:29.640 { 00:16:29.640 "name": "BaseBdev4", 00:16:29.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.640 "is_configured": false, 00:16:29.640 "data_offset": 0, 00:16:29.640 "data_size": 0 00:16:29.640 } 00:16:29.640 ] 00:16:29.640 }' 00:16:29.641 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.641 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.210 [2024-11-26 18:01:11.855689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.210 [2024-11-26 18:01:11.855741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.210 [2024-11-26 18:01:11.867647] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:30.210 [2024-11-26 18:01:11.867696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:30.210 [2024-11-26 18:01:11.867705] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.210 [2024-11-26 18:01:11.867715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.210 [2024-11-26 18:01:11.867721] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:30.210 [2024-11-26 18:01:11.867729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:30.210 [2024-11-26 18:01:11.867736] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:30.210 [2024-11-26 18:01:11.867744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.210 [2024-11-26 18:01:11.917662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.210 BaseBdev1 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.210 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.210 
18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.211 [ 00:16:30.211 { 00:16:30.211 "name": "BaseBdev1", 00:16:30.211 "aliases": [ 00:16:30.211 "03b35e2a-434d-4b29-af68-949c114577a0" 00:16:30.211 ], 00:16:30.211 "product_name": "Malloc disk", 00:16:30.211 "block_size": 512, 00:16:30.211 "num_blocks": 65536, 00:16:30.211 "uuid": "03b35e2a-434d-4b29-af68-949c114577a0", 00:16:30.211 "assigned_rate_limits": { 00:16:30.211 "rw_ios_per_sec": 0, 00:16:30.211 "rw_mbytes_per_sec": 0, 00:16:30.211 "r_mbytes_per_sec": 0, 00:16:30.211 "w_mbytes_per_sec": 0 00:16:30.211 }, 00:16:30.211 "claimed": true, 00:16:30.211 "claim_type": "exclusive_write", 00:16:30.211 "zoned": false, 00:16:30.211 "supported_io_types": { 00:16:30.211 "read": true, 00:16:30.211 "write": true, 00:16:30.211 "unmap": true, 00:16:30.211 "flush": true, 00:16:30.211 "reset": true, 00:16:30.211 "nvme_admin": false, 00:16:30.211 "nvme_io": false, 00:16:30.211 "nvme_io_md": false, 00:16:30.211 "write_zeroes": true, 00:16:30.211 "zcopy": true, 00:16:30.211 "get_zone_info": false, 00:16:30.211 "zone_management": false, 00:16:30.211 "zone_append": false, 00:16:30.211 "compare": false, 00:16:30.211 "compare_and_write": false, 00:16:30.211 "abort": true, 00:16:30.211 "seek_hole": false, 00:16:30.211 "seek_data": false, 00:16:30.211 "copy": true, 00:16:30.211 "nvme_iov_md": false 00:16:30.211 }, 00:16:30.211 "memory_domains": [ 00:16:30.211 { 00:16:30.211 "dma_device_id": "system", 00:16:30.211 "dma_device_type": 1 00:16:30.211 }, 00:16:30.211 { 00:16:30.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.211 "dma_device_type": 2 00:16:30.211 } 00:16:30.211 ], 00:16:30.211 "driver_specific": {} 00:16:30.211 } 
00:16:30.211 ] 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.211 "name": "Existed_Raid", 00:16:30.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.211 "strip_size_kb": 64, 00:16:30.211 "state": "configuring", 00:16:30.211 "raid_level": "raid5f", 00:16:30.211 "superblock": false, 00:16:30.211 "num_base_bdevs": 4, 00:16:30.211 "num_base_bdevs_discovered": 1, 00:16:30.211 "num_base_bdevs_operational": 4, 00:16:30.211 "base_bdevs_list": [ 00:16:30.211 { 00:16:30.211 "name": "BaseBdev1", 00:16:30.211 "uuid": "03b35e2a-434d-4b29-af68-949c114577a0", 00:16:30.211 "is_configured": true, 00:16:30.211 "data_offset": 0, 00:16:30.211 "data_size": 65536 00:16:30.211 }, 00:16:30.211 { 00:16:30.211 "name": "BaseBdev2", 00:16:30.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.211 "is_configured": false, 00:16:30.211 "data_offset": 0, 00:16:30.211 "data_size": 0 00:16:30.211 }, 00:16:30.211 { 00:16:30.211 "name": "BaseBdev3", 00:16:30.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.211 "is_configured": false, 00:16:30.211 "data_offset": 0, 00:16:30.211 "data_size": 0 00:16:30.211 }, 00:16:30.211 { 00:16:30.211 "name": "BaseBdev4", 00:16:30.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.211 "is_configured": false, 00:16:30.211 "data_offset": 0, 00:16:30.211 "data_size": 0 00:16:30.211 } 00:16:30.211 ] 00:16:30.211 }' 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.211 18:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.781 
[2024-11-26 18:01:12.392932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.781 [2024-11-26 18:01:12.392995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.781 [2024-11-26 18:01:12.400958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.781 [2024-11-26 18:01:12.403001] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.781 [2024-11-26 18:01:12.403070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.781 [2024-11-26 18:01:12.403081] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:30.781 [2024-11-26 18:01:12.403092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:30.781 [2024-11-26 18:01:12.403099] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:30.781 [2024-11-26 18:01:12.403108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.781 "name": "Existed_Raid", 00:16:30.781 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:30.781 "strip_size_kb": 64, 00:16:30.781 "state": "configuring", 00:16:30.781 "raid_level": "raid5f", 00:16:30.781 "superblock": false, 00:16:30.781 "num_base_bdevs": 4, 00:16:30.781 "num_base_bdevs_discovered": 1, 00:16:30.781 "num_base_bdevs_operational": 4, 00:16:30.781 "base_bdevs_list": [ 00:16:30.781 { 00:16:30.781 "name": "BaseBdev1", 00:16:30.781 "uuid": "03b35e2a-434d-4b29-af68-949c114577a0", 00:16:30.781 "is_configured": true, 00:16:30.781 "data_offset": 0, 00:16:30.781 "data_size": 65536 00:16:30.781 }, 00:16:30.781 { 00:16:30.781 "name": "BaseBdev2", 00:16:30.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.781 "is_configured": false, 00:16:30.781 "data_offset": 0, 00:16:30.781 "data_size": 0 00:16:30.781 }, 00:16:30.781 { 00:16:30.781 "name": "BaseBdev3", 00:16:30.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.781 "is_configured": false, 00:16:30.781 "data_offset": 0, 00:16:30.781 "data_size": 0 00:16:30.781 }, 00:16:30.781 { 00:16:30.781 "name": "BaseBdev4", 00:16:30.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.781 "is_configured": false, 00:16:30.781 "data_offset": 0, 00:16:30.781 "data_size": 0 00:16:30.781 } 00:16:30.781 ] 00:16:30.781 }' 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.781 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.042 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:31.042 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.042 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.042 [2024-11-26 18:01:12.885110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.042 BaseBdev2 00:16:31.042 18:01:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.042 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:31.042 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:31.042 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:31.042 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:31.042 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:31.042 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:31.042 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:31.042 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.042 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.042 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.042 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:31.042 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.042 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.303 [ 00:16:31.303 { 00:16:31.303 "name": "BaseBdev2", 00:16:31.303 "aliases": [ 00:16:31.303 "a8b6b858-a8e3-4f85-b35d-7db5fd91b77c" 00:16:31.303 ], 00:16:31.303 "product_name": "Malloc disk", 00:16:31.303 "block_size": 512, 00:16:31.303 "num_blocks": 65536, 00:16:31.303 "uuid": "a8b6b858-a8e3-4f85-b35d-7db5fd91b77c", 00:16:31.303 "assigned_rate_limits": { 00:16:31.303 "rw_ios_per_sec": 0, 00:16:31.303 "rw_mbytes_per_sec": 0, 00:16:31.303 
"r_mbytes_per_sec": 0, 00:16:31.303 "w_mbytes_per_sec": 0 00:16:31.303 }, 00:16:31.303 "claimed": true, 00:16:31.303 "claim_type": "exclusive_write", 00:16:31.303 "zoned": false, 00:16:31.303 "supported_io_types": { 00:16:31.303 "read": true, 00:16:31.303 "write": true, 00:16:31.303 "unmap": true, 00:16:31.303 "flush": true, 00:16:31.303 "reset": true, 00:16:31.303 "nvme_admin": false, 00:16:31.303 "nvme_io": false, 00:16:31.304 "nvme_io_md": false, 00:16:31.304 "write_zeroes": true, 00:16:31.304 "zcopy": true, 00:16:31.304 "get_zone_info": false, 00:16:31.304 "zone_management": false, 00:16:31.304 "zone_append": false, 00:16:31.304 "compare": false, 00:16:31.304 "compare_and_write": false, 00:16:31.304 "abort": true, 00:16:31.304 "seek_hole": false, 00:16:31.304 "seek_data": false, 00:16:31.304 "copy": true, 00:16:31.304 "nvme_iov_md": false 00:16:31.304 }, 00:16:31.304 "memory_domains": [ 00:16:31.304 { 00:16:31.304 "dma_device_id": "system", 00:16:31.304 "dma_device_type": 1 00:16:31.304 }, 00:16:31.304 { 00:16:31.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.304 "dma_device_type": 2 00:16:31.304 } 00:16:31.304 ], 00:16:31.304 "driver_specific": {} 00:16:31.304 } 00:16:31.304 ] 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.304 "name": "Existed_Raid", 00:16:31.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.304 "strip_size_kb": 64, 00:16:31.304 "state": "configuring", 00:16:31.304 "raid_level": "raid5f", 00:16:31.304 "superblock": false, 00:16:31.304 "num_base_bdevs": 4, 00:16:31.304 "num_base_bdevs_discovered": 2, 00:16:31.304 "num_base_bdevs_operational": 4, 00:16:31.304 "base_bdevs_list": [ 00:16:31.304 { 00:16:31.304 "name": "BaseBdev1", 00:16:31.304 "uuid": 
"03b35e2a-434d-4b29-af68-949c114577a0", 00:16:31.304 "is_configured": true, 00:16:31.304 "data_offset": 0, 00:16:31.304 "data_size": 65536 00:16:31.304 }, 00:16:31.304 { 00:16:31.304 "name": "BaseBdev2", 00:16:31.304 "uuid": "a8b6b858-a8e3-4f85-b35d-7db5fd91b77c", 00:16:31.304 "is_configured": true, 00:16:31.304 "data_offset": 0, 00:16:31.304 "data_size": 65536 00:16:31.304 }, 00:16:31.304 { 00:16:31.304 "name": "BaseBdev3", 00:16:31.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.304 "is_configured": false, 00:16:31.304 "data_offset": 0, 00:16:31.304 "data_size": 0 00:16:31.304 }, 00:16:31.304 { 00:16:31.304 "name": "BaseBdev4", 00:16:31.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.304 "is_configured": false, 00:16:31.304 "data_offset": 0, 00:16:31.304 "data_size": 0 00:16:31.304 } 00:16:31.304 ] 00:16:31.304 }' 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.304 18:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.563 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:31.563 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.563 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.821 [2024-11-26 18:01:13.441950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.821 BaseBdev3 00:16:31.821 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.821 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:31.821 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:31.821 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:16:31.821 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:31.821 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:31.821 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:31.821 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:31.821 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.821 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.821 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.821 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:31.821 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.822 [ 00:16:31.822 { 00:16:31.822 "name": "BaseBdev3", 00:16:31.822 "aliases": [ 00:16:31.822 "c21758d3-f3a8-4554-aebc-7c9851626b98" 00:16:31.822 ], 00:16:31.822 "product_name": "Malloc disk", 00:16:31.822 "block_size": 512, 00:16:31.822 "num_blocks": 65536, 00:16:31.822 "uuid": "c21758d3-f3a8-4554-aebc-7c9851626b98", 00:16:31.822 "assigned_rate_limits": { 00:16:31.822 "rw_ios_per_sec": 0, 00:16:31.822 "rw_mbytes_per_sec": 0, 00:16:31.822 "r_mbytes_per_sec": 0, 00:16:31.822 "w_mbytes_per_sec": 0 00:16:31.822 }, 00:16:31.822 "claimed": true, 00:16:31.822 "claim_type": "exclusive_write", 00:16:31.822 "zoned": false, 00:16:31.822 "supported_io_types": { 00:16:31.822 "read": true, 00:16:31.822 "write": true, 00:16:31.822 "unmap": true, 00:16:31.822 "flush": true, 00:16:31.822 "reset": true, 00:16:31.822 "nvme_admin": false, 
00:16:31.822 "nvme_io": false, 00:16:31.822 "nvme_io_md": false, 00:16:31.822 "write_zeroes": true, 00:16:31.822 "zcopy": true, 00:16:31.822 "get_zone_info": false, 00:16:31.822 "zone_management": false, 00:16:31.822 "zone_append": false, 00:16:31.822 "compare": false, 00:16:31.822 "compare_and_write": false, 00:16:31.822 "abort": true, 00:16:31.822 "seek_hole": false, 00:16:31.822 "seek_data": false, 00:16:31.822 "copy": true, 00:16:31.822 "nvme_iov_md": false 00:16:31.822 }, 00:16:31.822 "memory_domains": [ 00:16:31.822 { 00:16:31.822 "dma_device_id": "system", 00:16:31.822 "dma_device_type": 1 00:16:31.822 }, 00:16:31.822 { 00:16:31.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.822 "dma_device_type": 2 00:16:31.822 } 00:16:31.822 ], 00:16:31.822 "driver_specific": {} 00:16:31.822 } 00:16:31.822 ] 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.822 "name": "Existed_Raid", 00:16:31.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.822 "strip_size_kb": 64, 00:16:31.822 "state": "configuring", 00:16:31.822 "raid_level": "raid5f", 00:16:31.822 "superblock": false, 00:16:31.822 "num_base_bdevs": 4, 00:16:31.822 "num_base_bdevs_discovered": 3, 00:16:31.822 "num_base_bdevs_operational": 4, 00:16:31.822 "base_bdevs_list": [ 00:16:31.822 { 00:16:31.822 "name": "BaseBdev1", 00:16:31.822 "uuid": "03b35e2a-434d-4b29-af68-949c114577a0", 00:16:31.822 "is_configured": true, 00:16:31.822 "data_offset": 0, 00:16:31.822 "data_size": 65536 00:16:31.822 }, 00:16:31.822 { 00:16:31.822 "name": "BaseBdev2", 00:16:31.822 "uuid": "a8b6b858-a8e3-4f85-b35d-7db5fd91b77c", 00:16:31.822 "is_configured": true, 00:16:31.822 "data_offset": 0, 00:16:31.822 "data_size": 65536 00:16:31.822 }, 00:16:31.822 { 
00:16:31.822 "name": "BaseBdev3", 00:16:31.822 "uuid": "c21758d3-f3a8-4554-aebc-7c9851626b98", 00:16:31.822 "is_configured": true, 00:16:31.822 "data_offset": 0, 00:16:31.822 "data_size": 65536 00:16:31.822 }, 00:16:31.822 { 00:16:31.822 "name": "BaseBdev4", 00:16:31.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.822 "is_configured": false, 00:16:31.822 "data_offset": 0, 00:16:31.822 "data_size": 0 00:16:31.822 } 00:16:31.822 ] 00:16:31.822 }' 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.822 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.083 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:32.083 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.083 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.343 [2024-11-26 18:01:13.967575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:32.343 [2024-11-26 18:01:13.967646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:32.343 [2024-11-26 18:01:13.967656] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:32.343 [2024-11-26 18:01:13.967927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:32.343 [2024-11-26 18:01:13.975392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:32.343 [2024-11-26 18:01:13.975420] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:32.343 [2024-11-26 18:01:13.975704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.343 BaseBdev4 00:16:32.343 18:01:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.343 18:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:32.343 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:32.344 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:32.344 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:32.344 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:32.344 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:32.344 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:32.344 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.344 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.344 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.344 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:32.344 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.344 18:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.344 [ 00:16:32.344 { 00:16:32.344 "name": "BaseBdev4", 00:16:32.344 "aliases": [ 00:16:32.344 "88f5fcd0-8bc9-40e9-84db-692ce572c727" 00:16:32.344 ], 00:16:32.344 "product_name": "Malloc disk", 00:16:32.344 "block_size": 512, 00:16:32.344 "num_blocks": 65536, 00:16:32.344 "uuid": "88f5fcd0-8bc9-40e9-84db-692ce572c727", 00:16:32.344 "assigned_rate_limits": { 00:16:32.344 "rw_ios_per_sec": 0, 00:16:32.344 
"rw_mbytes_per_sec": 0, 00:16:32.344 "r_mbytes_per_sec": 0, 00:16:32.344 "w_mbytes_per_sec": 0 00:16:32.344 }, 00:16:32.344 "claimed": true, 00:16:32.344 "claim_type": "exclusive_write", 00:16:32.344 "zoned": false, 00:16:32.344 "supported_io_types": { 00:16:32.344 "read": true, 00:16:32.344 "write": true, 00:16:32.344 "unmap": true, 00:16:32.344 "flush": true, 00:16:32.344 "reset": true, 00:16:32.344 "nvme_admin": false, 00:16:32.344 "nvme_io": false, 00:16:32.344 "nvme_io_md": false, 00:16:32.344 "write_zeroes": true, 00:16:32.344 "zcopy": true, 00:16:32.344 "get_zone_info": false, 00:16:32.344 "zone_management": false, 00:16:32.344 "zone_append": false, 00:16:32.344 "compare": false, 00:16:32.344 "compare_and_write": false, 00:16:32.344 "abort": true, 00:16:32.344 "seek_hole": false, 00:16:32.344 "seek_data": false, 00:16:32.344 "copy": true, 00:16:32.344 "nvme_iov_md": false 00:16:32.344 }, 00:16:32.344 "memory_domains": [ 00:16:32.344 { 00:16:32.344 "dma_device_id": "system", 00:16:32.344 "dma_device_type": 1 00:16:32.344 }, 00:16:32.344 { 00:16:32.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.344 "dma_device_type": 2 00:16:32.344 } 00:16:32.344 ], 00:16:32.344 "driver_specific": {} 00:16:32.344 } 00:16:32.344 ] 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.344 18:01:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.344 "name": "Existed_Raid", 00:16:32.344 "uuid": "ebe9e7ce-b70d-428b-9d4b-5f97e87e966f", 00:16:32.344 "strip_size_kb": 64, 00:16:32.344 "state": "online", 00:16:32.344 "raid_level": "raid5f", 00:16:32.344 "superblock": false, 00:16:32.344 "num_base_bdevs": 4, 00:16:32.344 "num_base_bdevs_discovered": 4, 00:16:32.344 "num_base_bdevs_operational": 4, 00:16:32.344 "base_bdevs_list": [ 00:16:32.344 { 00:16:32.344 "name": 
"BaseBdev1", 00:16:32.344 "uuid": "03b35e2a-434d-4b29-af68-949c114577a0", 00:16:32.344 "is_configured": true, 00:16:32.344 "data_offset": 0, 00:16:32.344 "data_size": 65536 00:16:32.344 }, 00:16:32.344 { 00:16:32.344 "name": "BaseBdev2", 00:16:32.344 "uuid": "a8b6b858-a8e3-4f85-b35d-7db5fd91b77c", 00:16:32.344 "is_configured": true, 00:16:32.344 "data_offset": 0, 00:16:32.344 "data_size": 65536 00:16:32.344 }, 00:16:32.344 { 00:16:32.344 "name": "BaseBdev3", 00:16:32.344 "uuid": "c21758d3-f3a8-4554-aebc-7c9851626b98", 00:16:32.344 "is_configured": true, 00:16:32.344 "data_offset": 0, 00:16:32.344 "data_size": 65536 00:16:32.344 }, 00:16:32.344 { 00:16:32.344 "name": "BaseBdev4", 00:16:32.344 "uuid": "88f5fcd0-8bc9-40e9-84db-692ce572c727", 00:16:32.344 "is_configured": true, 00:16:32.344 "data_offset": 0, 00:16:32.344 "data_size": 65536 00:16:32.344 } 00:16:32.344 ] 00:16:32.344 }' 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.344 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.604 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:32.604 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:32.604 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:32.865 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:32.865 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:32.865 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:32.865 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:32.865 18:01:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.865 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:32.865 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.865 [2024-11-26 18:01:14.475400] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.865 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.865 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:32.865 "name": "Existed_Raid", 00:16:32.865 "aliases": [ 00:16:32.865 "ebe9e7ce-b70d-428b-9d4b-5f97e87e966f" 00:16:32.865 ], 00:16:32.865 "product_name": "Raid Volume", 00:16:32.865 "block_size": 512, 00:16:32.865 "num_blocks": 196608, 00:16:32.865 "uuid": "ebe9e7ce-b70d-428b-9d4b-5f97e87e966f", 00:16:32.865 "assigned_rate_limits": { 00:16:32.865 "rw_ios_per_sec": 0, 00:16:32.865 "rw_mbytes_per_sec": 0, 00:16:32.865 "r_mbytes_per_sec": 0, 00:16:32.865 "w_mbytes_per_sec": 0 00:16:32.865 }, 00:16:32.865 "claimed": false, 00:16:32.865 "zoned": false, 00:16:32.865 "supported_io_types": { 00:16:32.865 "read": true, 00:16:32.865 "write": true, 00:16:32.865 "unmap": false, 00:16:32.865 "flush": false, 00:16:32.865 "reset": true, 00:16:32.865 "nvme_admin": false, 00:16:32.865 "nvme_io": false, 00:16:32.865 "nvme_io_md": false, 00:16:32.865 "write_zeroes": true, 00:16:32.865 "zcopy": false, 00:16:32.865 "get_zone_info": false, 00:16:32.865 "zone_management": false, 00:16:32.865 "zone_append": false, 00:16:32.865 "compare": false, 00:16:32.865 "compare_and_write": false, 00:16:32.865 "abort": false, 00:16:32.865 "seek_hole": false, 00:16:32.865 "seek_data": false, 00:16:32.865 "copy": false, 00:16:32.865 "nvme_iov_md": false 00:16:32.865 }, 00:16:32.865 "driver_specific": { 00:16:32.865 "raid": { 00:16:32.865 "uuid": "ebe9e7ce-b70d-428b-9d4b-5f97e87e966f", 00:16:32.865 "strip_size_kb": 64, 
00:16:32.865 "state": "online", 00:16:32.865 "raid_level": "raid5f", 00:16:32.865 "superblock": false, 00:16:32.865 "num_base_bdevs": 4, 00:16:32.865 "num_base_bdevs_discovered": 4, 00:16:32.865 "num_base_bdevs_operational": 4, 00:16:32.865 "base_bdevs_list": [ 00:16:32.865 { 00:16:32.865 "name": "BaseBdev1", 00:16:32.865 "uuid": "03b35e2a-434d-4b29-af68-949c114577a0", 00:16:32.865 "is_configured": true, 00:16:32.865 "data_offset": 0, 00:16:32.865 "data_size": 65536 00:16:32.865 }, 00:16:32.865 { 00:16:32.865 "name": "BaseBdev2", 00:16:32.865 "uuid": "a8b6b858-a8e3-4f85-b35d-7db5fd91b77c", 00:16:32.865 "is_configured": true, 00:16:32.865 "data_offset": 0, 00:16:32.865 "data_size": 65536 00:16:32.866 }, 00:16:32.866 { 00:16:32.866 "name": "BaseBdev3", 00:16:32.866 "uuid": "c21758d3-f3a8-4554-aebc-7c9851626b98", 00:16:32.866 "is_configured": true, 00:16:32.866 "data_offset": 0, 00:16:32.866 "data_size": 65536 00:16:32.866 }, 00:16:32.866 { 00:16:32.866 "name": "BaseBdev4", 00:16:32.866 "uuid": "88f5fcd0-8bc9-40e9-84db-692ce572c727", 00:16:32.866 "is_configured": true, 00:16:32.866 "data_offset": 0, 00:16:32.866 "data_size": 65536 00:16:32.866 } 00:16:32.866 ] 00:16:32.866 } 00:16:32.866 } 00:16:32.866 }' 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:32.866 BaseBdev2 00:16:32.866 BaseBdev3 00:16:32.866 BaseBdev4' 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.866 18:01:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.866 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.126 [2024-11-26 18:01:14.762720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.126 18:01:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.126 "name": "Existed_Raid", 00:16:33.126 "uuid": "ebe9e7ce-b70d-428b-9d4b-5f97e87e966f", 00:16:33.126 "strip_size_kb": 64, 00:16:33.126 "state": "online", 00:16:33.126 "raid_level": "raid5f", 00:16:33.126 "superblock": false, 00:16:33.126 "num_base_bdevs": 4, 00:16:33.126 "num_base_bdevs_discovered": 3, 00:16:33.126 "num_base_bdevs_operational": 3, 00:16:33.126 "base_bdevs_list": [ 00:16:33.126 { 00:16:33.126 "name": null, 00:16:33.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.126 "is_configured": false, 00:16:33.126 "data_offset": 0, 00:16:33.126 "data_size": 65536 00:16:33.126 }, 00:16:33.126 { 00:16:33.126 "name": "BaseBdev2", 00:16:33.126 "uuid": "a8b6b858-a8e3-4f85-b35d-7db5fd91b77c", 00:16:33.126 "is_configured": true, 00:16:33.126 "data_offset": 0, 00:16:33.126 "data_size": 65536 00:16:33.126 }, 00:16:33.126 { 00:16:33.126 "name": "BaseBdev3", 00:16:33.126 "uuid": "c21758d3-f3a8-4554-aebc-7c9851626b98", 00:16:33.126 "is_configured": true, 00:16:33.126 "data_offset": 0, 00:16:33.126 "data_size": 65536 00:16:33.126 }, 00:16:33.126 { 00:16:33.126 "name": "BaseBdev4", 00:16:33.126 "uuid": "88f5fcd0-8bc9-40e9-84db-692ce572c727", 00:16:33.126 "is_configured": true, 00:16:33.126 "data_offset": 0, 00:16:33.126 "data_size": 65536 00:16:33.126 } 00:16:33.126 ] 00:16:33.126 }' 00:16:33.126 
18:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.126 18:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.695 [2024-11-26 18:01:15.346255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:33.695 [2024-11-26 18:01:15.346379] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.695 [2024-11-26 18:01:15.448812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.695 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.695 [2024-11-26 18:01:15.504763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.954 [2024-11-26 18:01:15.661997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:33.954 [2024-11-26 18:01:15.662084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.954 18:01:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.954 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.214 BaseBdev2 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.214 [ 00:16:34.214 { 00:16:34.214 "name": "BaseBdev2", 00:16:34.214 "aliases": [ 00:16:34.214 "b162d4e0-3a4e-4838-a059-7687c71685d1" 00:16:34.214 ], 00:16:34.214 "product_name": "Malloc disk", 00:16:34.214 "block_size": 512, 00:16:34.214 "num_blocks": 65536, 00:16:34.214 "uuid": "b162d4e0-3a4e-4838-a059-7687c71685d1", 00:16:34.214 "assigned_rate_limits": { 00:16:34.214 "rw_ios_per_sec": 0, 00:16:34.214 "rw_mbytes_per_sec": 0, 00:16:34.214 "r_mbytes_per_sec": 0, 00:16:34.214 "w_mbytes_per_sec": 0 00:16:34.214 }, 00:16:34.214 "claimed": false, 00:16:34.214 "zoned": false, 00:16:34.214 "supported_io_types": { 00:16:34.214 "read": true, 00:16:34.214 "write": true, 00:16:34.214 "unmap": true, 00:16:34.214 "flush": true, 00:16:34.214 "reset": true, 00:16:34.214 "nvme_admin": false, 00:16:34.214 "nvme_io": false, 00:16:34.214 "nvme_io_md": false, 00:16:34.214 "write_zeroes": true, 00:16:34.214 "zcopy": true, 00:16:34.214 "get_zone_info": false, 00:16:34.214 "zone_management": false, 00:16:34.214 "zone_append": false, 00:16:34.214 "compare": false, 00:16:34.214 "compare_and_write": false, 00:16:34.214 "abort": true, 00:16:34.214 "seek_hole": false, 00:16:34.214 "seek_data": false, 00:16:34.214 "copy": true, 00:16:34.214 "nvme_iov_md": false 00:16:34.214 }, 00:16:34.214 "memory_domains": [ 00:16:34.214 { 00:16:34.214 "dma_device_id": "system", 00:16:34.214 "dma_device_type": 1 00:16:34.214 }, 
00:16:34.214 { 00:16:34.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.214 "dma_device_type": 2 00:16:34.214 } 00:16:34.214 ], 00:16:34.214 "driver_specific": {} 00:16:34.214 } 00:16:34.214 ] 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.214 BaseBdev3 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.214 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.214 [ 00:16:34.214 { 00:16:34.214 "name": "BaseBdev3", 00:16:34.214 "aliases": [ 00:16:34.214 "73be6a8e-c5e0-4555-9d05-4c55f6679645" 00:16:34.214 ], 00:16:34.215 "product_name": "Malloc disk", 00:16:34.215 "block_size": 512, 00:16:34.215 "num_blocks": 65536, 00:16:34.215 "uuid": "73be6a8e-c5e0-4555-9d05-4c55f6679645", 00:16:34.215 "assigned_rate_limits": { 00:16:34.215 "rw_ios_per_sec": 0, 00:16:34.215 "rw_mbytes_per_sec": 0, 00:16:34.215 "r_mbytes_per_sec": 0, 00:16:34.215 "w_mbytes_per_sec": 0 00:16:34.215 }, 00:16:34.215 "claimed": false, 00:16:34.215 "zoned": false, 00:16:34.215 "supported_io_types": { 00:16:34.215 "read": true, 00:16:34.215 "write": true, 00:16:34.215 "unmap": true, 00:16:34.215 "flush": true, 00:16:34.215 "reset": true, 00:16:34.215 "nvme_admin": false, 00:16:34.215 "nvme_io": false, 00:16:34.215 "nvme_io_md": false, 00:16:34.215 "write_zeroes": true, 00:16:34.215 "zcopy": true, 00:16:34.215 "get_zone_info": false, 00:16:34.215 "zone_management": false, 00:16:34.215 "zone_append": false, 00:16:34.215 "compare": false, 00:16:34.215 "compare_and_write": false, 00:16:34.215 "abort": true, 00:16:34.215 "seek_hole": false, 00:16:34.215 "seek_data": false, 00:16:34.215 "copy": true, 00:16:34.215 "nvme_iov_md": false 00:16:34.215 }, 00:16:34.215 "memory_domains": [ 00:16:34.215 { 00:16:34.215 "dma_device_id": "system", 00:16:34.215 
"dma_device_type": 1 00:16:34.215 }, 00:16:34.215 { 00:16:34.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.215 "dma_device_type": 2 00:16:34.215 } 00:16:34.215 ], 00:16:34.215 "driver_specific": {} 00:16:34.215 } 00:16:34.215 ] 00:16:34.215 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.215 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:34.215 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:34.215 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:34.215 18:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:34.215 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.215 18:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.215 BaseBdev4 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:34.215 18:01:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.215 [ 00:16:34.215 { 00:16:34.215 "name": "BaseBdev4", 00:16:34.215 "aliases": [ 00:16:34.215 "03f28f95-606e-42e6-a359-c969f742d60d" 00:16:34.215 ], 00:16:34.215 "product_name": "Malloc disk", 00:16:34.215 "block_size": 512, 00:16:34.215 "num_blocks": 65536, 00:16:34.215 "uuid": "03f28f95-606e-42e6-a359-c969f742d60d", 00:16:34.215 "assigned_rate_limits": { 00:16:34.215 "rw_ios_per_sec": 0, 00:16:34.215 "rw_mbytes_per_sec": 0, 00:16:34.215 "r_mbytes_per_sec": 0, 00:16:34.215 "w_mbytes_per_sec": 0 00:16:34.215 }, 00:16:34.215 "claimed": false, 00:16:34.215 "zoned": false, 00:16:34.215 "supported_io_types": { 00:16:34.215 "read": true, 00:16:34.215 "write": true, 00:16:34.215 "unmap": true, 00:16:34.215 "flush": true, 00:16:34.215 "reset": true, 00:16:34.215 "nvme_admin": false, 00:16:34.215 "nvme_io": false, 00:16:34.215 "nvme_io_md": false, 00:16:34.215 "write_zeroes": true, 00:16:34.215 "zcopy": true, 00:16:34.215 "get_zone_info": false, 00:16:34.215 "zone_management": false, 00:16:34.215 "zone_append": false, 00:16:34.215 "compare": false, 00:16:34.215 "compare_and_write": false, 00:16:34.215 "abort": true, 00:16:34.215 "seek_hole": false, 00:16:34.215 "seek_data": false, 00:16:34.215 "copy": true, 00:16:34.215 "nvme_iov_md": false 00:16:34.215 }, 00:16:34.215 "memory_domains": [ 00:16:34.215 { 00:16:34.215 
"dma_device_id": "system", 00:16:34.215 "dma_device_type": 1 00:16:34.215 }, 00:16:34.215 { 00:16:34.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.215 "dma_device_type": 2 00:16:34.215 } 00:16:34.215 ], 00:16:34.215 "driver_specific": {} 00:16:34.215 } 00:16:34.215 ] 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.215 [2024-11-26 18:01:16.062495] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:34.215 [2024-11-26 18:01:16.062604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:34.215 [2024-11-26 18:01:16.062667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:34.215 [2024-11-26 18:01:16.064805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:34.215 [2024-11-26 18:01:16.064930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.215 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.474 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.474 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.474 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.474 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.474 "name": "Existed_Raid", 00:16:34.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.474 "strip_size_kb": 64, 00:16:34.474 "state": "configuring", 00:16:34.474 "raid_level": "raid5f", 00:16:34.474 "superblock": false, 00:16:34.474 
"num_base_bdevs": 4, 00:16:34.474 "num_base_bdevs_discovered": 3, 00:16:34.474 "num_base_bdevs_operational": 4, 00:16:34.474 "base_bdevs_list": [ 00:16:34.474 { 00:16:34.474 "name": "BaseBdev1", 00:16:34.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.474 "is_configured": false, 00:16:34.474 "data_offset": 0, 00:16:34.474 "data_size": 0 00:16:34.474 }, 00:16:34.474 { 00:16:34.474 "name": "BaseBdev2", 00:16:34.474 "uuid": "b162d4e0-3a4e-4838-a059-7687c71685d1", 00:16:34.474 "is_configured": true, 00:16:34.474 "data_offset": 0, 00:16:34.474 "data_size": 65536 00:16:34.474 }, 00:16:34.474 { 00:16:34.474 "name": "BaseBdev3", 00:16:34.474 "uuid": "73be6a8e-c5e0-4555-9d05-4c55f6679645", 00:16:34.474 "is_configured": true, 00:16:34.474 "data_offset": 0, 00:16:34.474 "data_size": 65536 00:16:34.474 }, 00:16:34.474 { 00:16:34.474 "name": "BaseBdev4", 00:16:34.474 "uuid": "03f28f95-606e-42e6-a359-c969f742d60d", 00:16:34.474 "is_configured": true, 00:16:34.474 "data_offset": 0, 00:16:34.474 "data_size": 65536 00:16:34.474 } 00:16:34.474 ] 00:16:34.474 }' 00:16:34.474 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.474 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.733 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:34.733 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.733 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.733 [2024-11-26 18:01:16.501756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:34.733 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.733 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:16:34.733 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.733 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.733 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.733 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.733 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.733 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.733 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.733 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.733 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.733 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.733 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.734 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.734 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.734 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.734 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.734 "name": "Existed_Raid", 00:16:34.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.734 "strip_size_kb": 64, 00:16:34.734 "state": "configuring", 00:16:34.734 "raid_level": "raid5f", 00:16:34.734 "superblock": false, 00:16:34.734 "num_base_bdevs": 4, 
00:16:34.734 "num_base_bdevs_discovered": 2, 00:16:34.734 "num_base_bdevs_operational": 4, 00:16:34.734 "base_bdevs_list": [ 00:16:34.734 { 00:16:34.734 "name": "BaseBdev1", 00:16:34.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.734 "is_configured": false, 00:16:34.734 "data_offset": 0, 00:16:34.734 "data_size": 0 00:16:34.734 }, 00:16:34.734 { 00:16:34.734 "name": null, 00:16:34.734 "uuid": "b162d4e0-3a4e-4838-a059-7687c71685d1", 00:16:34.734 "is_configured": false, 00:16:34.734 "data_offset": 0, 00:16:34.734 "data_size": 65536 00:16:34.734 }, 00:16:34.734 { 00:16:34.734 "name": "BaseBdev3", 00:16:34.734 "uuid": "73be6a8e-c5e0-4555-9d05-4c55f6679645", 00:16:34.734 "is_configured": true, 00:16:34.734 "data_offset": 0, 00:16:34.734 "data_size": 65536 00:16:34.734 }, 00:16:34.734 { 00:16:34.734 "name": "BaseBdev4", 00:16:34.734 "uuid": "03f28f95-606e-42e6-a359-c969f742d60d", 00:16:34.734 "is_configured": true, 00:16:34.734 "data_offset": 0, 00:16:34.734 "data_size": 65536 00:16:34.734 } 00:16:34.734 ] 00:16:34.734 }' 00:16:34.734 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.734 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.301 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.301 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.301 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.301 18:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:35.301 18:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:35.301 18:01:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.301 [2024-11-26 18:01:17.046981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.301 BaseBdev1 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.301 18:01:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.301 [ 00:16:35.301 { 00:16:35.301 "name": "BaseBdev1", 00:16:35.301 "aliases": [ 00:16:35.301 "d3ac2a95-65d9-4eda-8bba-58ae8b4d5a5e" 00:16:35.301 ], 00:16:35.301 "product_name": "Malloc disk", 00:16:35.301 "block_size": 512, 00:16:35.301 "num_blocks": 65536, 00:16:35.301 "uuid": "d3ac2a95-65d9-4eda-8bba-58ae8b4d5a5e", 00:16:35.301 "assigned_rate_limits": { 00:16:35.301 "rw_ios_per_sec": 0, 00:16:35.301 "rw_mbytes_per_sec": 0, 00:16:35.301 "r_mbytes_per_sec": 0, 00:16:35.301 "w_mbytes_per_sec": 0 00:16:35.301 }, 00:16:35.301 "claimed": true, 00:16:35.301 "claim_type": "exclusive_write", 00:16:35.301 "zoned": false, 00:16:35.301 "supported_io_types": { 00:16:35.301 "read": true, 00:16:35.301 "write": true, 00:16:35.301 "unmap": true, 00:16:35.301 "flush": true, 00:16:35.301 "reset": true, 00:16:35.301 "nvme_admin": false, 00:16:35.301 "nvme_io": false, 00:16:35.301 "nvme_io_md": false, 00:16:35.301 "write_zeroes": true, 00:16:35.301 "zcopy": true, 00:16:35.301 "get_zone_info": false, 00:16:35.301 "zone_management": false, 00:16:35.301 "zone_append": false, 00:16:35.301 "compare": false, 00:16:35.301 "compare_and_write": false, 00:16:35.301 "abort": true, 00:16:35.301 "seek_hole": false, 00:16:35.301 "seek_data": false, 00:16:35.301 "copy": true, 00:16:35.301 "nvme_iov_md": false 00:16:35.301 }, 00:16:35.301 "memory_domains": [ 00:16:35.301 { 00:16:35.301 "dma_device_id": "system", 00:16:35.301 "dma_device_type": 1 00:16:35.301 }, 00:16:35.301 { 00:16:35.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.301 "dma_device_type": 2 00:16:35.301 } 00:16:35.301 ], 00:16:35.301 "driver_specific": {} 00:16:35.301 } 00:16:35.301 ] 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:35.301 18:01:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.301 "name": "Existed_Raid", 00:16:35.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.301 "strip_size_kb": 64, 00:16:35.301 "state": 
"configuring", 00:16:35.301 "raid_level": "raid5f", 00:16:35.301 "superblock": false, 00:16:35.301 "num_base_bdevs": 4, 00:16:35.301 "num_base_bdevs_discovered": 3, 00:16:35.301 "num_base_bdevs_operational": 4, 00:16:35.301 "base_bdevs_list": [ 00:16:35.301 { 00:16:35.301 "name": "BaseBdev1", 00:16:35.301 "uuid": "d3ac2a95-65d9-4eda-8bba-58ae8b4d5a5e", 00:16:35.301 "is_configured": true, 00:16:35.301 "data_offset": 0, 00:16:35.301 "data_size": 65536 00:16:35.301 }, 00:16:35.301 { 00:16:35.301 "name": null, 00:16:35.301 "uuid": "b162d4e0-3a4e-4838-a059-7687c71685d1", 00:16:35.301 "is_configured": false, 00:16:35.301 "data_offset": 0, 00:16:35.301 "data_size": 65536 00:16:35.301 }, 00:16:35.301 { 00:16:35.301 "name": "BaseBdev3", 00:16:35.301 "uuid": "73be6a8e-c5e0-4555-9d05-4c55f6679645", 00:16:35.301 "is_configured": true, 00:16:35.301 "data_offset": 0, 00:16:35.301 "data_size": 65536 00:16:35.301 }, 00:16:35.301 { 00:16:35.301 "name": "BaseBdev4", 00:16:35.301 "uuid": "03f28f95-606e-42e6-a359-c969f742d60d", 00:16:35.301 "is_configured": true, 00:16:35.301 "data_offset": 0, 00:16:35.301 "data_size": 65536 00:16:35.301 } 00:16:35.301 ] 00:16:35.301 }' 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.301 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.869 18:01:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.869 [2024-11-26 18:01:17.590176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.869 18:01:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.869 "name": "Existed_Raid", 00:16:35.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.869 "strip_size_kb": 64, 00:16:35.869 "state": "configuring", 00:16:35.869 "raid_level": "raid5f", 00:16:35.869 "superblock": false, 00:16:35.869 "num_base_bdevs": 4, 00:16:35.869 "num_base_bdevs_discovered": 2, 00:16:35.869 "num_base_bdevs_operational": 4, 00:16:35.869 "base_bdevs_list": [ 00:16:35.869 { 00:16:35.869 "name": "BaseBdev1", 00:16:35.869 "uuid": "d3ac2a95-65d9-4eda-8bba-58ae8b4d5a5e", 00:16:35.869 "is_configured": true, 00:16:35.869 "data_offset": 0, 00:16:35.869 "data_size": 65536 00:16:35.869 }, 00:16:35.869 { 00:16:35.869 "name": null, 00:16:35.869 "uuid": "b162d4e0-3a4e-4838-a059-7687c71685d1", 00:16:35.869 "is_configured": false, 00:16:35.869 "data_offset": 0, 00:16:35.869 "data_size": 65536 00:16:35.869 }, 00:16:35.869 { 00:16:35.869 "name": null, 00:16:35.869 "uuid": "73be6a8e-c5e0-4555-9d05-4c55f6679645", 00:16:35.869 "is_configured": false, 00:16:35.869 "data_offset": 0, 00:16:35.869 "data_size": 65536 00:16:35.869 }, 00:16:35.869 { 00:16:35.869 "name": "BaseBdev4", 00:16:35.869 "uuid": "03f28f95-606e-42e6-a359-c969f742d60d", 00:16:35.869 "is_configured": true, 00:16:35.869 "data_offset": 0, 00:16:35.869 "data_size": 65536 00:16:35.869 } 00:16:35.869 ] 00:16:35.869 }' 00:16:35.869 18:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.869 18:01:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.481 [2024-11-26 18:01:18.117314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.481 
18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.481 "name": "Existed_Raid", 00:16:36.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.481 "strip_size_kb": 64, 00:16:36.481 "state": "configuring", 00:16:36.481 "raid_level": "raid5f", 00:16:36.481 "superblock": false, 00:16:36.481 "num_base_bdevs": 4, 00:16:36.481 "num_base_bdevs_discovered": 3, 00:16:36.481 "num_base_bdevs_operational": 4, 00:16:36.481 "base_bdevs_list": [ 00:16:36.481 { 00:16:36.481 "name": "BaseBdev1", 00:16:36.481 "uuid": "d3ac2a95-65d9-4eda-8bba-58ae8b4d5a5e", 00:16:36.481 "is_configured": true, 00:16:36.481 "data_offset": 0, 00:16:36.481 "data_size": 65536 00:16:36.481 }, 00:16:36.481 { 00:16:36.481 "name": null, 00:16:36.481 "uuid": "b162d4e0-3a4e-4838-a059-7687c71685d1", 00:16:36.481 "is_configured": 
false, 00:16:36.481 "data_offset": 0, 00:16:36.481 "data_size": 65536 00:16:36.481 }, 00:16:36.481 { 00:16:36.481 "name": "BaseBdev3", 00:16:36.481 "uuid": "73be6a8e-c5e0-4555-9d05-4c55f6679645", 00:16:36.481 "is_configured": true, 00:16:36.481 "data_offset": 0, 00:16:36.481 "data_size": 65536 00:16:36.481 }, 00:16:36.481 { 00:16:36.481 "name": "BaseBdev4", 00:16:36.481 "uuid": "03f28f95-606e-42e6-a359-c969f742d60d", 00:16:36.481 "is_configured": true, 00:16:36.481 "data_offset": 0, 00:16:36.481 "data_size": 65536 00:16:36.481 } 00:16:36.481 ] 00:16:36.481 }' 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.481 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.048 [2024-11-26 18:01:18.656449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.048 "name": "Existed_Raid", 00:16:37.048 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:37.048 "strip_size_kb": 64, 00:16:37.048 "state": "configuring", 00:16:37.048 "raid_level": "raid5f", 00:16:37.048 "superblock": false, 00:16:37.048 "num_base_bdevs": 4, 00:16:37.048 "num_base_bdevs_discovered": 2, 00:16:37.048 "num_base_bdevs_operational": 4, 00:16:37.048 "base_bdevs_list": [ 00:16:37.048 { 00:16:37.048 "name": null, 00:16:37.048 "uuid": "d3ac2a95-65d9-4eda-8bba-58ae8b4d5a5e", 00:16:37.048 "is_configured": false, 00:16:37.048 "data_offset": 0, 00:16:37.048 "data_size": 65536 00:16:37.048 }, 00:16:37.048 { 00:16:37.048 "name": null, 00:16:37.048 "uuid": "b162d4e0-3a4e-4838-a059-7687c71685d1", 00:16:37.048 "is_configured": false, 00:16:37.048 "data_offset": 0, 00:16:37.048 "data_size": 65536 00:16:37.048 }, 00:16:37.048 { 00:16:37.048 "name": "BaseBdev3", 00:16:37.048 "uuid": "73be6a8e-c5e0-4555-9d05-4c55f6679645", 00:16:37.048 "is_configured": true, 00:16:37.048 "data_offset": 0, 00:16:37.048 "data_size": 65536 00:16:37.048 }, 00:16:37.048 { 00:16:37.048 "name": "BaseBdev4", 00:16:37.048 "uuid": "03f28f95-606e-42e6-a359-c969f742d60d", 00:16:37.048 "is_configured": true, 00:16:37.048 "data_offset": 0, 00:16:37.048 "data_size": 65536 00:16:37.048 } 00:16:37.048 ] 00:16:37.048 }' 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.048 18:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.616 [2024-11-26 18:01:19.237258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.616 "name": "Existed_Raid", 00:16:37.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.616 "strip_size_kb": 64, 00:16:37.616 "state": "configuring", 00:16:37.616 "raid_level": "raid5f", 00:16:37.616 "superblock": false, 00:16:37.616 "num_base_bdevs": 4, 00:16:37.616 "num_base_bdevs_discovered": 3, 00:16:37.616 "num_base_bdevs_operational": 4, 00:16:37.616 "base_bdevs_list": [ 00:16:37.616 { 00:16:37.616 "name": null, 00:16:37.616 "uuid": "d3ac2a95-65d9-4eda-8bba-58ae8b4d5a5e", 00:16:37.616 "is_configured": false, 00:16:37.616 "data_offset": 0, 00:16:37.616 "data_size": 65536 00:16:37.616 }, 00:16:37.616 { 00:16:37.616 "name": "BaseBdev2", 00:16:37.616 "uuid": "b162d4e0-3a4e-4838-a059-7687c71685d1", 00:16:37.616 "is_configured": true, 00:16:37.616 "data_offset": 0, 00:16:37.616 "data_size": 65536 00:16:37.616 }, 00:16:37.616 { 00:16:37.616 "name": "BaseBdev3", 00:16:37.616 "uuid": "73be6a8e-c5e0-4555-9d05-4c55f6679645", 00:16:37.616 "is_configured": true, 00:16:37.616 "data_offset": 0, 00:16:37.616 "data_size": 65536 00:16:37.616 }, 00:16:37.616 { 00:16:37.616 "name": "BaseBdev4", 00:16:37.616 "uuid": "03f28f95-606e-42e6-a359-c969f742d60d", 00:16:37.616 "is_configured": true, 00:16:37.616 "data_offset": 0, 00:16:37.616 "data_size": 65536 00:16:37.616 } 00:16:37.616 ] 00:16:37.616 }' 00:16:37.616 18:01:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.616 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d3ac2a95-65d9-4eda-8bba-58ae8b4d5a5e 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.185 [2024-11-26 18:01:19.887968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:38.185 [2024-11-26 
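Throughout this trace, each `bdev_raid_remove_base_bdev` / `bdev_raid_add_base_bdev` call moves `num_base_bdevs_discovered` down or up by one while the raid stays in the `configuring` state. That counter is derivable from `base_bdevs_list` itself; as a small sketch (sample inline JSON mirroring the record above, not a live RPC call), the configured entries can be counted with `jq`:

```shell
# Sample base_bdevs_list matching the state logged above
# (BaseBdev1 removed, BaseBdev2 added -> 3 of 4 configured).
json='[{"name":"Existed_Raid","base_bdevs_list":[
  {"name":null,"is_configured":false},
  {"name":"BaseBdev2","is_configured":true},
  {"name":"BaseBdev3","is_configured":true},
  {"name":"BaseBdev4","is_configured":true}]}]'

# Count entries with is_configured == true; this tracks
# num_base_bdevs_discovered in the raid bdev record.
echo "$json" | jq '[.[0].base_bdevs_list[] | select(.is_configured)] | length'
# -> 3
```

Once the count reaches `num_base_bdevs` (4), the raid transitions from `configuring` to `online`, which is exactly what the final `verify_raid_bdev_state Existed_Raid online` check in this test asserts.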
18:01:19.888132] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:38.185 [2024-11-26 18:01:19.888163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:38.185 [2024-11-26 18:01:19.888487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:38.185 [2024-11-26 18:01:19.896543] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:38.185 [2024-11-26 18:01:19.896574] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:38.185 [2024-11-26 18:01:19.896887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.185 NewBaseBdev 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.185 18:01:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.186 [ 00:16:38.186 { 00:16:38.186 "name": "NewBaseBdev", 00:16:38.186 "aliases": [ 00:16:38.186 "d3ac2a95-65d9-4eda-8bba-58ae8b4d5a5e" 00:16:38.186 ], 00:16:38.186 "product_name": "Malloc disk", 00:16:38.186 "block_size": 512, 00:16:38.186 "num_blocks": 65536, 00:16:38.186 "uuid": "d3ac2a95-65d9-4eda-8bba-58ae8b4d5a5e", 00:16:38.186 "assigned_rate_limits": { 00:16:38.186 "rw_ios_per_sec": 0, 00:16:38.186 "rw_mbytes_per_sec": 0, 00:16:38.186 "r_mbytes_per_sec": 0, 00:16:38.186 "w_mbytes_per_sec": 0 00:16:38.186 }, 00:16:38.186 "claimed": true, 00:16:38.186 "claim_type": "exclusive_write", 00:16:38.186 "zoned": false, 00:16:38.186 "supported_io_types": { 00:16:38.186 "read": true, 00:16:38.186 "write": true, 00:16:38.186 "unmap": true, 00:16:38.186 "flush": true, 00:16:38.186 "reset": true, 00:16:38.186 "nvme_admin": false, 00:16:38.186 "nvme_io": false, 00:16:38.186 "nvme_io_md": false, 00:16:38.186 "write_zeroes": true, 00:16:38.186 "zcopy": true, 00:16:38.186 "get_zone_info": false, 00:16:38.186 "zone_management": false, 00:16:38.186 "zone_append": false, 00:16:38.186 "compare": false, 00:16:38.186 "compare_and_write": false, 00:16:38.186 "abort": true, 00:16:38.186 "seek_hole": false, 00:16:38.186 "seek_data": false, 00:16:38.186 "copy": true, 00:16:38.186 "nvme_iov_md": false 00:16:38.186 }, 00:16:38.186 "memory_domains": [ 00:16:38.186 { 00:16:38.186 "dma_device_id": "system", 00:16:38.186 "dma_device_type": 1 00:16:38.186 }, 00:16:38.186 { 00:16:38.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.186 "dma_device_type": 2 00:16:38.186 } 
00:16:38.186 ], 00:16:38.186 "driver_specific": {} 00:16:38.186 } 00:16:38.186 ] 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.186 "name": "Existed_Raid", 00:16:38.186 "uuid": "392bd53e-2a70-4932-900b-7ef686a7c62f", 00:16:38.186 "strip_size_kb": 64, 00:16:38.186 "state": "online", 00:16:38.186 "raid_level": "raid5f", 00:16:38.186 "superblock": false, 00:16:38.186 "num_base_bdevs": 4, 00:16:38.186 "num_base_bdevs_discovered": 4, 00:16:38.186 "num_base_bdevs_operational": 4, 00:16:38.186 "base_bdevs_list": [ 00:16:38.186 { 00:16:38.186 "name": "NewBaseBdev", 00:16:38.186 "uuid": "d3ac2a95-65d9-4eda-8bba-58ae8b4d5a5e", 00:16:38.186 "is_configured": true, 00:16:38.186 "data_offset": 0, 00:16:38.186 "data_size": 65536 00:16:38.186 }, 00:16:38.186 { 00:16:38.186 "name": "BaseBdev2", 00:16:38.186 "uuid": "b162d4e0-3a4e-4838-a059-7687c71685d1", 00:16:38.186 "is_configured": true, 00:16:38.186 "data_offset": 0, 00:16:38.186 "data_size": 65536 00:16:38.186 }, 00:16:38.186 { 00:16:38.186 "name": "BaseBdev3", 00:16:38.186 "uuid": "73be6a8e-c5e0-4555-9d05-4c55f6679645", 00:16:38.186 "is_configured": true, 00:16:38.186 "data_offset": 0, 00:16:38.186 "data_size": 65536 00:16:38.186 }, 00:16:38.186 { 00:16:38.186 "name": "BaseBdev4", 00:16:38.186 "uuid": "03f28f95-606e-42e6-a359-c969f742d60d", 00:16:38.186 "is_configured": true, 00:16:38.186 "data_offset": 0, 00:16:38.186 "data_size": 65536 00:16:38.186 } 00:16:38.186 ] 00:16:38.186 }' 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.186 18:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.755 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:38.755 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:38.755 18:01:20 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:38.755 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:38.755 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:38.755 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:38.755 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:38.755 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:38.755 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.755 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.755 [2024-11-26 18:01:20.449589] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.755 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.755 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:38.755 "name": "Existed_Raid", 00:16:38.755 "aliases": [ 00:16:38.755 "392bd53e-2a70-4932-900b-7ef686a7c62f" 00:16:38.755 ], 00:16:38.755 "product_name": "Raid Volume", 00:16:38.755 "block_size": 512, 00:16:38.755 "num_blocks": 196608, 00:16:38.755 "uuid": "392bd53e-2a70-4932-900b-7ef686a7c62f", 00:16:38.755 "assigned_rate_limits": { 00:16:38.755 "rw_ios_per_sec": 0, 00:16:38.755 "rw_mbytes_per_sec": 0, 00:16:38.755 "r_mbytes_per_sec": 0, 00:16:38.755 "w_mbytes_per_sec": 0 00:16:38.755 }, 00:16:38.755 "claimed": false, 00:16:38.755 "zoned": false, 00:16:38.755 "supported_io_types": { 00:16:38.756 "read": true, 00:16:38.756 "write": true, 00:16:38.756 "unmap": false, 00:16:38.756 "flush": false, 00:16:38.756 "reset": true, 00:16:38.756 "nvme_admin": false, 00:16:38.756 "nvme_io": false, 00:16:38.756 "nvme_io_md": 
false, 00:16:38.756 "write_zeroes": true, 00:16:38.756 "zcopy": false, 00:16:38.756 "get_zone_info": false, 00:16:38.756 "zone_management": false, 00:16:38.756 "zone_append": false, 00:16:38.756 "compare": false, 00:16:38.756 "compare_and_write": false, 00:16:38.756 "abort": false, 00:16:38.756 "seek_hole": false, 00:16:38.756 "seek_data": false, 00:16:38.756 "copy": false, 00:16:38.756 "nvme_iov_md": false 00:16:38.756 }, 00:16:38.756 "driver_specific": { 00:16:38.756 "raid": { 00:16:38.756 "uuid": "392bd53e-2a70-4932-900b-7ef686a7c62f", 00:16:38.756 "strip_size_kb": 64, 00:16:38.756 "state": "online", 00:16:38.756 "raid_level": "raid5f", 00:16:38.756 "superblock": false, 00:16:38.756 "num_base_bdevs": 4, 00:16:38.756 "num_base_bdevs_discovered": 4, 00:16:38.756 "num_base_bdevs_operational": 4, 00:16:38.756 "base_bdevs_list": [ 00:16:38.756 { 00:16:38.756 "name": "NewBaseBdev", 00:16:38.756 "uuid": "d3ac2a95-65d9-4eda-8bba-58ae8b4d5a5e", 00:16:38.756 "is_configured": true, 00:16:38.756 "data_offset": 0, 00:16:38.756 "data_size": 65536 00:16:38.756 }, 00:16:38.756 { 00:16:38.756 "name": "BaseBdev2", 00:16:38.756 "uuid": "b162d4e0-3a4e-4838-a059-7687c71685d1", 00:16:38.756 "is_configured": true, 00:16:38.756 "data_offset": 0, 00:16:38.756 "data_size": 65536 00:16:38.756 }, 00:16:38.756 { 00:16:38.756 "name": "BaseBdev3", 00:16:38.756 "uuid": "73be6a8e-c5e0-4555-9d05-4c55f6679645", 00:16:38.756 "is_configured": true, 00:16:38.756 "data_offset": 0, 00:16:38.756 "data_size": 65536 00:16:38.756 }, 00:16:38.756 { 00:16:38.756 "name": "BaseBdev4", 00:16:38.756 "uuid": "03f28f95-606e-42e6-a359-c969f742d60d", 00:16:38.756 "is_configured": true, 00:16:38.756 "data_offset": 0, 00:16:38.756 "data_size": 65536 00:16:38.756 } 00:16:38.756 ] 00:16:38.756 } 00:16:38.756 } 00:16:38.756 }' 00:16:38.756 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:38.756 18:01:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:38.756 BaseBdev2 00:16:38.756 BaseBdev3 00:16:38.756 BaseBdev4' 00:16:38.756 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.756 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:38.756 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.756 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:38.756 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.756 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.756 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.016 18:01:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.016 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.016 [2024-11-26 18:01:20.800687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:39.016 [2024-11-26 18:01:20.800853] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.016 [2024-11-26 18:01:20.801011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.017 [2024-11-26 18:01:20.801523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.017 [2024-11-26 18:01:20.801621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:39.017 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.017 18:01:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83177 00:16:39.017 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83177 ']' 00:16:39.017 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83177 00:16:39.017 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:39.017 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:16:39.017 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83177 00:16:39.017 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:39.017 killing process with pid 83177 00:16:39.017 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:39.017 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83177' 00:16:39.017 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83177 00:16:39.017 [2024-11-26 18:01:20.850688] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:39.017 18:01:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83177 00:16:39.584 [2024-11-26 18:01:21.314001] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:40.965 00:16:40.965 real 0m12.177s 00:16:40.965 user 0m19.202s 00:16:40.965 sys 0m2.160s 00:16:40.965 ************************************ 00:16:40.965 END TEST raid5f_state_function_test 00:16:40.965 ************************************ 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.965 18:01:22 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:40.965 18:01:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:40.965 18:01:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.965 18:01:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:40.965 ************************************ 00:16:40.965 START TEST 
raid5f_state_function_test_sb 00:16:40.965 ************************************ 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:40.965 
18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83856 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83856' 00:16:40.965 Process raid pid: 83856 00:16:40.965 18:01:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83856 00:16:40.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83856 ']' 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.965 18:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.965 [2024-11-26 18:01:22.804187] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:16:40.966 [2024-11-26 18:01:22.804339] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.226 [2024-11-26 18:01:22.982039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.484 [2024-11-26 18:01:23.129542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.743 [2024-11-26 18:01:23.382356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.743 [2024-11-26 18:01:23.382419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.003 [2024-11-26 18:01:23.641241] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:42.003 [2024-11-26 18:01:23.641320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:42.003 [2024-11-26 18:01:23.641333] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.003 [2024-11-26 18:01:23.641346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.003 [2024-11-26 18:01:23.641354] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:42.003 [2024-11-26 18:01:23.641366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:42.003 [2024-11-26 18:01:23.641374] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:42.003 [2024-11-26 18:01:23.641386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.003 "name": "Existed_Raid", 00:16:42.003 "uuid": "05060b29-aa88-40f2-9b00-e4e678a94232", 00:16:42.003 "strip_size_kb": 64, 00:16:42.003 "state": "configuring", 00:16:42.003 "raid_level": "raid5f", 00:16:42.003 "superblock": true, 00:16:42.003 "num_base_bdevs": 4, 00:16:42.003 "num_base_bdevs_discovered": 0, 00:16:42.003 "num_base_bdevs_operational": 4, 00:16:42.003 "base_bdevs_list": [ 00:16:42.003 { 00:16:42.003 "name": "BaseBdev1", 00:16:42.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.003 "is_configured": false, 00:16:42.003 "data_offset": 0, 00:16:42.003 "data_size": 0 00:16:42.003 }, 00:16:42.003 { 00:16:42.003 "name": "BaseBdev2", 00:16:42.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.003 "is_configured": false, 00:16:42.003 "data_offset": 0, 00:16:42.003 "data_size": 0 00:16:42.003 }, 00:16:42.003 { 00:16:42.003 "name": "BaseBdev3", 00:16:42.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.003 "is_configured": false, 00:16:42.003 "data_offset": 0, 00:16:42.003 "data_size": 0 00:16:42.003 }, 00:16:42.003 { 00:16:42.003 "name": "BaseBdev4", 00:16:42.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.003 "is_configured": false, 00:16:42.003 "data_offset": 0, 00:16:42.003 "data_size": 0 00:16:42.003 } 00:16:42.003 ] 00:16:42.003 }' 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.003 18:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.264 [2024-11-26 18:01:24.024582] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.264 [2024-11-26 18:01:24.024753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.264 [2024-11-26 18:01:24.036566] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:42.264 [2024-11-26 18:01:24.036644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:42.264 [2024-11-26 18:01:24.036657] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.264 [2024-11-26 18:01:24.036671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.264 [2024-11-26 18:01:24.036679] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:42.264 [2024-11-26 18:01:24.036692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:42.264 [2024-11-26 18:01:24.036701] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:42.264 [2024-11-26 18:01:24.036713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.264 [2024-11-26 18:01:24.095312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.264 BaseBdev1 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.264 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.264 [ 00:16:42.264 { 00:16:42.264 "name": "BaseBdev1", 00:16:42.264 "aliases": [ 00:16:42.264 "6e6a0961-eb54-436a-ae4e-3d322b143c7b" 00:16:42.264 ], 00:16:42.264 "product_name": "Malloc disk", 00:16:42.264 "block_size": 512, 00:16:42.264 "num_blocks": 65536, 00:16:42.264 "uuid": "6e6a0961-eb54-436a-ae4e-3d322b143c7b", 00:16:42.264 "assigned_rate_limits": { 00:16:42.264 "rw_ios_per_sec": 0, 00:16:42.264 "rw_mbytes_per_sec": 0, 00:16:42.264 "r_mbytes_per_sec": 0, 00:16:42.264 "w_mbytes_per_sec": 0 00:16:42.264 }, 00:16:42.264 "claimed": true, 00:16:42.264 "claim_type": "exclusive_write", 00:16:42.525 "zoned": false, 00:16:42.525 "supported_io_types": { 00:16:42.525 "read": true, 00:16:42.525 "write": true, 00:16:42.525 "unmap": true, 00:16:42.525 "flush": true, 00:16:42.525 "reset": true, 00:16:42.525 "nvme_admin": false, 00:16:42.525 "nvme_io": false, 00:16:42.525 "nvme_io_md": false, 00:16:42.525 "write_zeroes": true, 00:16:42.525 "zcopy": true, 00:16:42.525 "get_zone_info": false, 00:16:42.525 "zone_management": false, 00:16:42.525 "zone_append": false, 00:16:42.525 "compare": false, 00:16:42.525 "compare_and_write": false, 00:16:42.525 "abort": true, 00:16:42.525 "seek_hole": false, 00:16:42.525 "seek_data": false, 00:16:42.525 "copy": true, 00:16:42.525 "nvme_iov_md": false 00:16:42.525 }, 00:16:42.525 "memory_domains": [ 00:16:42.525 { 00:16:42.525 "dma_device_id": "system", 00:16:42.525 "dma_device_type": 1 00:16:42.525 }, 00:16:42.525 { 00:16:42.525 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:42.525 "dma_device_type": 2 00:16:42.525 } 00:16:42.525 ], 00:16:42.525 "driver_specific": {} 00:16:42.525 } 00:16:42.525 ] 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.525 18:01:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.525 "name": "Existed_Raid", 00:16:42.525 "uuid": "91d80388-c76d-4762-b72e-d6c8a1debdff", 00:16:42.525 "strip_size_kb": 64, 00:16:42.525 "state": "configuring", 00:16:42.525 "raid_level": "raid5f", 00:16:42.525 "superblock": true, 00:16:42.525 "num_base_bdevs": 4, 00:16:42.525 "num_base_bdevs_discovered": 1, 00:16:42.525 "num_base_bdevs_operational": 4, 00:16:42.525 "base_bdevs_list": [ 00:16:42.525 { 00:16:42.525 "name": "BaseBdev1", 00:16:42.525 "uuid": "6e6a0961-eb54-436a-ae4e-3d322b143c7b", 00:16:42.525 "is_configured": true, 00:16:42.525 "data_offset": 2048, 00:16:42.525 "data_size": 63488 00:16:42.525 }, 00:16:42.525 { 00:16:42.525 "name": "BaseBdev2", 00:16:42.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.525 "is_configured": false, 00:16:42.525 "data_offset": 0, 00:16:42.525 "data_size": 0 00:16:42.525 }, 00:16:42.525 { 00:16:42.525 "name": "BaseBdev3", 00:16:42.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.525 "is_configured": false, 00:16:42.525 "data_offset": 0, 00:16:42.525 "data_size": 0 00:16:42.525 }, 00:16:42.525 { 00:16:42.525 "name": "BaseBdev4", 00:16:42.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.525 "is_configured": false, 00:16:42.525 "data_offset": 0, 00:16:42.525 "data_size": 0 00:16:42.525 } 00:16:42.525 ] 00:16:42.525 }' 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.525 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:42.785 18:01:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.785 [2024-11-26 18:01:24.538667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.785 [2024-11-26 18:01:24.538857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.785 [2024-11-26 18:01:24.550747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.785 [2024-11-26 18:01:24.553116] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.785 [2024-11-26 18:01:24.553225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.785 [2024-11-26 18:01:24.553279] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:42.785 [2024-11-26 18:01:24.553321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:42.785 [2024-11-26 18:01:24.553356] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:42.785 [2024-11-26 18:01:24.553396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.785 18:01:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.785 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.785 "name": "Existed_Raid", 00:16:42.785 "uuid": "1f9d2828-55b7-4166-825c-1e37e0284a48", 00:16:42.785 "strip_size_kb": 64, 00:16:42.785 "state": "configuring", 00:16:42.785 "raid_level": "raid5f", 00:16:42.785 "superblock": true, 00:16:42.785 "num_base_bdevs": 4, 00:16:42.785 "num_base_bdevs_discovered": 1, 00:16:42.785 "num_base_bdevs_operational": 4, 00:16:42.785 "base_bdevs_list": [ 00:16:42.785 { 00:16:42.785 "name": "BaseBdev1", 00:16:42.785 "uuid": "6e6a0961-eb54-436a-ae4e-3d322b143c7b", 00:16:42.785 "is_configured": true, 00:16:42.785 "data_offset": 2048, 00:16:42.785 "data_size": 63488 00:16:42.785 }, 00:16:42.785 { 00:16:42.785 "name": "BaseBdev2", 00:16:42.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.785 "is_configured": false, 00:16:42.785 "data_offset": 0, 00:16:42.785 "data_size": 0 00:16:42.785 }, 00:16:42.785 { 00:16:42.785 "name": "BaseBdev3", 00:16:42.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.785 "is_configured": false, 00:16:42.785 "data_offset": 0, 00:16:42.785 "data_size": 0 00:16:42.785 }, 00:16:42.785 { 00:16:42.785 "name": "BaseBdev4", 00:16:42.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.785 "is_configured": false, 00:16:42.785 "data_offset": 0, 00:16:42.785 "data_size": 0 00:16:42.785 } 00:16:42.786 ] 00:16:42.786 }' 00:16:42.786 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.786 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.357 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:43.357 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:43.357 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.357 [2024-11-26 18:01:24.995531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.357 BaseBdev2 00:16:43.357 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.357 18:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:43.357 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:43.357 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:43.357 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:43.357 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:43.357 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:43.357 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:43.357 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.357 18:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.357 [ 00:16:43.357 { 00:16:43.357 "name": "BaseBdev2", 00:16:43.357 "aliases": [ 00:16:43.357 
"6c89dc36-f922-474c-b52f-c3a9533ba2a6" 00:16:43.357 ], 00:16:43.357 "product_name": "Malloc disk", 00:16:43.357 "block_size": 512, 00:16:43.357 "num_blocks": 65536, 00:16:43.357 "uuid": "6c89dc36-f922-474c-b52f-c3a9533ba2a6", 00:16:43.357 "assigned_rate_limits": { 00:16:43.357 "rw_ios_per_sec": 0, 00:16:43.357 "rw_mbytes_per_sec": 0, 00:16:43.357 "r_mbytes_per_sec": 0, 00:16:43.357 "w_mbytes_per_sec": 0 00:16:43.357 }, 00:16:43.357 "claimed": true, 00:16:43.357 "claim_type": "exclusive_write", 00:16:43.357 "zoned": false, 00:16:43.357 "supported_io_types": { 00:16:43.357 "read": true, 00:16:43.357 "write": true, 00:16:43.357 "unmap": true, 00:16:43.357 "flush": true, 00:16:43.357 "reset": true, 00:16:43.357 "nvme_admin": false, 00:16:43.357 "nvme_io": false, 00:16:43.357 "nvme_io_md": false, 00:16:43.357 "write_zeroes": true, 00:16:43.357 "zcopy": true, 00:16:43.357 "get_zone_info": false, 00:16:43.357 "zone_management": false, 00:16:43.357 "zone_append": false, 00:16:43.357 "compare": false, 00:16:43.357 "compare_and_write": false, 00:16:43.357 "abort": true, 00:16:43.357 "seek_hole": false, 00:16:43.357 "seek_data": false, 00:16:43.357 "copy": true, 00:16:43.357 "nvme_iov_md": false 00:16:43.357 }, 00:16:43.357 "memory_domains": [ 00:16:43.357 { 00:16:43.357 "dma_device_id": "system", 00:16:43.357 "dma_device_type": 1 00:16:43.357 }, 00:16:43.357 { 00:16:43.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.357 "dma_device_type": 2 00:16:43.357 } 00:16:43.357 ], 00:16:43.357 "driver_specific": {} 00:16:43.357 } 00:16:43.357 ] 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.357 "name": "Existed_Raid", 00:16:43.357 "uuid": 
"1f9d2828-55b7-4166-825c-1e37e0284a48", 00:16:43.357 "strip_size_kb": 64, 00:16:43.357 "state": "configuring", 00:16:43.357 "raid_level": "raid5f", 00:16:43.357 "superblock": true, 00:16:43.357 "num_base_bdevs": 4, 00:16:43.357 "num_base_bdevs_discovered": 2, 00:16:43.357 "num_base_bdevs_operational": 4, 00:16:43.357 "base_bdevs_list": [ 00:16:43.357 { 00:16:43.357 "name": "BaseBdev1", 00:16:43.357 "uuid": "6e6a0961-eb54-436a-ae4e-3d322b143c7b", 00:16:43.357 "is_configured": true, 00:16:43.357 "data_offset": 2048, 00:16:43.357 "data_size": 63488 00:16:43.357 }, 00:16:43.357 { 00:16:43.357 "name": "BaseBdev2", 00:16:43.357 "uuid": "6c89dc36-f922-474c-b52f-c3a9533ba2a6", 00:16:43.357 "is_configured": true, 00:16:43.357 "data_offset": 2048, 00:16:43.357 "data_size": 63488 00:16:43.357 }, 00:16:43.357 { 00:16:43.357 "name": "BaseBdev3", 00:16:43.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.357 "is_configured": false, 00:16:43.357 "data_offset": 0, 00:16:43.357 "data_size": 0 00:16:43.357 }, 00:16:43.357 { 00:16:43.357 "name": "BaseBdev4", 00:16:43.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.357 "is_configured": false, 00:16:43.357 "data_offset": 0, 00:16:43.357 "data_size": 0 00:16:43.357 } 00:16:43.357 ] 00:16:43.357 }' 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.357 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.926 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:43.926 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.926 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.926 [2024-11-26 18:01:25.575337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:43.926 BaseBdev3 
00:16:43.926 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.926 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:43.926 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:43.926 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:43.926 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:43.926 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:43.926 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:43.926 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:43.926 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.926 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.926 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.926 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:43.926 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.926 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.926 [ 00:16:43.926 { 00:16:43.926 "name": "BaseBdev3", 00:16:43.926 "aliases": [ 00:16:43.926 "371345f9-5356-40e1-8477-1442433dd3a4" 00:16:43.926 ], 00:16:43.926 "product_name": "Malloc disk", 00:16:43.926 "block_size": 512, 00:16:43.927 "num_blocks": 65536, 00:16:43.927 "uuid": "371345f9-5356-40e1-8477-1442433dd3a4", 00:16:43.927 
"assigned_rate_limits": { 00:16:43.927 "rw_ios_per_sec": 0, 00:16:43.927 "rw_mbytes_per_sec": 0, 00:16:43.927 "r_mbytes_per_sec": 0, 00:16:43.927 "w_mbytes_per_sec": 0 00:16:43.927 }, 00:16:43.927 "claimed": true, 00:16:43.927 "claim_type": "exclusive_write", 00:16:43.927 "zoned": false, 00:16:43.927 "supported_io_types": { 00:16:43.927 "read": true, 00:16:43.927 "write": true, 00:16:43.927 "unmap": true, 00:16:43.927 "flush": true, 00:16:43.927 "reset": true, 00:16:43.927 "nvme_admin": false, 00:16:43.927 "nvme_io": false, 00:16:43.927 "nvme_io_md": false, 00:16:43.927 "write_zeroes": true, 00:16:43.927 "zcopy": true, 00:16:43.927 "get_zone_info": false, 00:16:43.927 "zone_management": false, 00:16:43.927 "zone_append": false, 00:16:43.927 "compare": false, 00:16:43.927 "compare_and_write": false, 00:16:43.927 "abort": true, 00:16:43.927 "seek_hole": false, 00:16:43.927 "seek_data": false, 00:16:43.927 "copy": true, 00:16:43.927 "nvme_iov_md": false 00:16:43.927 }, 00:16:43.927 "memory_domains": [ 00:16:43.927 { 00:16:43.927 "dma_device_id": "system", 00:16:43.927 "dma_device_type": 1 00:16:43.927 }, 00:16:43.927 { 00:16:43.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.927 "dma_device_type": 2 00:16:43.927 } 00:16:43.927 ], 00:16:43.927 "driver_specific": {} 00:16:43.927 } 00:16:43.927 ] 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.927 "name": "Existed_Raid", 00:16:43.927 "uuid": "1f9d2828-55b7-4166-825c-1e37e0284a48", 00:16:43.927 "strip_size_kb": 64, 00:16:43.927 "state": "configuring", 00:16:43.927 "raid_level": "raid5f", 00:16:43.927 "superblock": true, 00:16:43.927 "num_base_bdevs": 4, 00:16:43.927 "num_base_bdevs_discovered": 3, 
00:16:43.927 "num_base_bdevs_operational": 4, 00:16:43.927 "base_bdevs_list": [ 00:16:43.927 { 00:16:43.927 "name": "BaseBdev1", 00:16:43.927 "uuid": "6e6a0961-eb54-436a-ae4e-3d322b143c7b", 00:16:43.927 "is_configured": true, 00:16:43.927 "data_offset": 2048, 00:16:43.927 "data_size": 63488 00:16:43.927 }, 00:16:43.927 { 00:16:43.927 "name": "BaseBdev2", 00:16:43.927 "uuid": "6c89dc36-f922-474c-b52f-c3a9533ba2a6", 00:16:43.927 "is_configured": true, 00:16:43.927 "data_offset": 2048, 00:16:43.927 "data_size": 63488 00:16:43.927 }, 00:16:43.927 { 00:16:43.927 "name": "BaseBdev3", 00:16:43.927 "uuid": "371345f9-5356-40e1-8477-1442433dd3a4", 00:16:43.927 "is_configured": true, 00:16:43.927 "data_offset": 2048, 00:16:43.927 "data_size": 63488 00:16:43.927 }, 00:16:43.927 { 00:16:43.927 "name": "BaseBdev4", 00:16:43.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.927 "is_configured": false, 00:16:43.927 "data_offset": 0, 00:16:43.927 "data_size": 0 00:16:43.927 } 00:16:43.927 ] 00:16:43.927 }' 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.927 18:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.187 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:44.187 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.187 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.447 [2024-11-26 18:01:26.092599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:44.447 [2024-11-26 18:01:26.093143] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:44.447 [2024-11-26 18:01:26.093214] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:44.447 [2024-11-26 
18:01:26.093589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:44.447 BaseBdev4 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.447 [2024-11-26 18:01:26.101594] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:44.447 [2024-11-26 18:01:26.101686] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:44.447 [2024-11-26 18:01:26.102146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:44.447 18:01:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.447 [ 00:16:44.447 { 00:16:44.447 "name": "BaseBdev4", 00:16:44.447 "aliases": [ 00:16:44.447 "baaefd3f-a650-4948-9fec-0279bf3d215e" 00:16:44.447 ], 00:16:44.447 "product_name": "Malloc disk", 00:16:44.447 "block_size": 512, 00:16:44.447 "num_blocks": 65536, 00:16:44.447 "uuid": "baaefd3f-a650-4948-9fec-0279bf3d215e", 00:16:44.447 "assigned_rate_limits": { 00:16:44.447 "rw_ios_per_sec": 0, 00:16:44.447 "rw_mbytes_per_sec": 0, 00:16:44.447 "r_mbytes_per_sec": 0, 00:16:44.447 "w_mbytes_per_sec": 0 00:16:44.447 }, 00:16:44.447 "claimed": true, 00:16:44.447 "claim_type": "exclusive_write", 00:16:44.447 "zoned": false, 00:16:44.447 "supported_io_types": { 00:16:44.447 "read": true, 00:16:44.447 "write": true, 00:16:44.447 "unmap": true, 00:16:44.447 "flush": true, 00:16:44.447 "reset": true, 00:16:44.447 "nvme_admin": false, 00:16:44.447 "nvme_io": false, 00:16:44.447 "nvme_io_md": false, 00:16:44.447 "write_zeroes": true, 00:16:44.447 "zcopy": true, 00:16:44.447 "get_zone_info": false, 00:16:44.447 "zone_management": false, 00:16:44.447 "zone_append": false, 00:16:44.447 "compare": false, 00:16:44.447 "compare_and_write": false, 00:16:44.447 "abort": true, 00:16:44.447 "seek_hole": false, 00:16:44.447 "seek_data": false, 00:16:44.447 "copy": true, 00:16:44.447 "nvme_iov_md": false 00:16:44.447 }, 00:16:44.447 "memory_domains": [ 00:16:44.447 { 00:16:44.447 "dma_device_id": "system", 00:16:44.447 "dma_device_type": 1 00:16:44.447 }, 00:16:44.447 { 00:16:44.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.447 "dma_device_type": 2 00:16:44.447 } 00:16:44.447 ], 00:16:44.447 "driver_specific": {} 00:16:44.447 } 00:16:44.447 ] 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.447 18:01:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.447 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.448 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.448 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.448 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.448 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.448 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:44.448 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.448 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.448 "name": "Existed_Raid", 00:16:44.448 "uuid": "1f9d2828-55b7-4166-825c-1e37e0284a48", 00:16:44.448 "strip_size_kb": 64, 00:16:44.448 "state": "online", 00:16:44.448 "raid_level": "raid5f", 00:16:44.448 "superblock": true, 00:16:44.448 "num_base_bdevs": 4, 00:16:44.448 "num_base_bdevs_discovered": 4, 00:16:44.448 "num_base_bdevs_operational": 4, 00:16:44.448 "base_bdevs_list": [ 00:16:44.448 { 00:16:44.448 "name": "BaseBdev1", 00:16:44.448 "uuid": "6e6a0961-eb54-436a-ae4e-3d322b143c7b", 00:16:44.448 "is_configured": true, 00:16:44.448 "data_offset": 2048, 00:16:44.448 "data_size": 63488 00:16:44.448 }, 00:16:44.448 { 00:16:44.448 "name": "BaseBdev2", 00:16:44.448 "uuid": "6c89dc36-f922-474c-b52f-c3a9533ba2a6", 00:16:44.448 "is_configured": true, 00:16:44.448 "data_offset": 2048, 00:16:44.448 "data_size": 63488 00:16:44.448 }, 00:16:44.448 { 00:16:44.448 "name": "BaseBdev3", 00:16:44.448 "uuid": "371345f9-5356-40e1-8477-1442433dd3a4", 00:16:44.448 "is_configured": true, 00:16:44.448 "data_offset": 2048, 00:16:44.448 "data_size": 63488 00:16:44.448 }, 00:16:44.448 { 00:16:44.448 "name": "BaseBdev4", 00:16:44.448 "uuid": "baaefd3f-a650-4948-9fec-0279bf3d215e", 00:16:44.448 "is_configured": true, 00:16:44.448 "data_offset": 2048, 00:16:44.448 "data_size": 63488 00:16:44.448 } 00:16:44.448 ] 00:16:44.448 }' 00:16:44.448 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.448 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.707 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:44.707 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:44.707 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:44.707 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:44.707 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:44.707 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:44.707 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:44.707 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:44.707 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.707 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.707 [2024-11-26 18:01:26.563950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.966 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.966 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:44.966 "name": "Existed_Raid", 00:16:44.966 "aliases": [ 00:16:44.966 "1f9d2828-55b7-4166-825c-1e37e0284a48" 00:16:44.966 ], 00:16:44.966 "product_name": "Raid Volume", 00:16:44.966 "block_size": 512, 00:16:44.966 "num_blocks": 190464, 00:16:44.966 "uuid": "1f9d2828-55b7-4166-825c-1e37e0284a48", 00:16:44.966 "assigned_rate_limits": { 00:16:44.966 "rw_ios_per_sec": 0, 00:16:44.966 "rw_mbytes_per_sec": 0, 00:16:44.966 "r_mbytes_per_sec": 0, 00:16:44.966 "w_mbytes_per_sec": 0 00:16:44.966 }, 00:16:44.966 "claimed": false, 00:16:44.966 "zoned": false, 00:16:44.966 "supported_io_types": { 00:16:44.966 "read": true, 00:16:44.966 "write": true, 00:16:44.966 "unmap": false, 00:16:44.966 "flush": false, 
00:16:44.966 "reset": true, 00:16:44.966 "nvme_admin": false, 00:16:44.966 "nvme_io": false, 00:16:44.966 "nvme_io_md": false, 00:16:44.966 "write_zeroes": true, 00:16:44.966 "zcopy": false, 00:16:44.966 "get_zone_info": false, 00:16:44.966 "zone_management": false, 00:16:44.966 "zone_append": false, 00:16:44.966 "compare": false, 00:16:44.966 "compare_and_write": false, 00:16:44.966 "abort": false, 00:16:44.966 "seek_hole": false, 00:16:44.966 "seek_data": false, 00:16:44.966 "copy": false, 00:16:44.966 "nvme_iov_md": false 00:16:44.966 }, 00:16:44.966 "driver_specific": { 00:16:44.966 "raid": { 00:16:44.966 "uuid": "1f9d2828-55b7-4166-825c-1e37e0284a48", 00:16:44.966 "strip_size_kb": 64, 00:16:44.966 "state": "online", 00:16:44.966 "raid_level": "raid5f", 00:16:44.966 "superblock": true, 00:16:44.966 "num_base_bdevs": 4, 00:16:44.966 "num_base_bdevs_discovered": 4, 00:16:44.966 "num_base_bdevs_operational": 4, 00:16:44.966 "base_bdevs_list": [ 00:16:44.966 { 00:16:44.966 "name": "BaseBdev1", 00:16:44.966 "uuid": "6e6a0961-eb54-436a-ae4e-3d322b143c7b", 00:16:44.966 "is_configured": true, 00:16:44.966 "data_offset": 2048, 00:16:44.966 "data_size": 63488 00:16:44.966 }, 00:16:44.966 { 00:16:44.966 "name": "BaseBdev2", 00:16:44.967 "uuid": "6c89dc36-f922-474c-b52f-c3a9533ba2a6", 00:16:44.967 "is_configured": true, 00:16:44.967 "data_offset": 2048, 00:16:44.967 "data_size": 63488 00:16:44.967 }, 00:16:44.967 { 00:16:44.967 "name": "BaseBdev3", 00:16:44.967 "uuid": "371345f9-5356-40e1-8477-1442433dd3a4", 00:16:44.967 "is_configured": true, 00:16:44.967 "data_offset": 2048, 00:16:44.967 "data_size": 63488 00:16:44.967 }, 00:16:44.967 { 00:16:44.967 "name": "BaseBdev4", 00:16:44.967 "uuid": "baaefd3f-a650-4948-9fec-0279bf3d215e", 00:16:44.967 "is_configured": true, 00:16:44.967 "data_offset": 2048, 00:16:44.967 "data_size": 63488 00:16:44.967 } 00:16:44.967 ] 00:16:44.967 } 00:16:44.967 } 00:16:44.967 }' 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:44.967 BaseBdev2 00:16:44.967 BaseBdev3 00:16:44.967 BaseBdev4' 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.967 18:01:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.967 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.226 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.226 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.226 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.226 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:45.226 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.226 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:45.226 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.226 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.226 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.226 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.226 18:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:45.226 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.226 18:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.226 [2024-11-26 18:01:26.899188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.226 "name": "Existed_Raid", 00:16:45.226 "uuid": "1f9d2828-55b7-4166-825c-1e37e0284a48", 00:16:45.226 "strip_size_kb": 64, 00:16:45.226 "state": "online", 00:16:45.226 "raid_level": "raid5f", 00:16:45.226 "superblock": true, 00:16:45.226 "num_base_bdevs": 4, 00:16:45.226 "num_base_bdevs_discovered": 3, 00:16:45.226 "num_base_bdevs_operational": 3, 00:16:45.226 "base_bdevs_list": [ 00:16:45.226 { 00:16:45.226 "name": null, 00:16:45.226 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:45.226 "is_configured": false, 00:16:45.226 "data_offset": 0, 00:16:45.226 "data_size": 63488 00:16:45.226 }, 00:16:45.226 { 00:16:45.226 "name": "BaseBdev2", 00:16:45.226 "uuid": "6c89dc36-f922-474c-b52f-c3a9533ba2a6", 00:16:45.226 "is_configured": true, 00:16:45.226 "data_offset": 2048, 00:16:45.226 "data_size": 63488 00:16:45.226 }, 00:16:45.226 { 00:16:45.226 "name": "BaseBdev3", 00:16:45.226 "uuid": "371345f9-5356-40e1-8477-1442433dd3a4", 00:16:45.226 "is_configured": true, 00:16:45.226 "data_offset": 2048, 00:16:45.226 "data_size": 63488 00:16:45.226 }, 00:16:45.226 { 00:16:45.226 "name": "BaseBdev4", 00:16:45.226 "uuid": "baaefd3f-a650-4948-9fec-0279bf3d215e", 00:16:45.226 "is_configured": true, 00:16:45.226 "data_offset": 2048, 00:16:45.226 "data_size": 63488 00:16:45.226 } 00:16:45.226 ] 00:16:45.226 }' 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.226 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.794 [2024-11-26 18:01:27.533558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:45.794 [2024-11-26 18:01:27.533893] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.794 [2024-11-26 18:01:27.646353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.794 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.102 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.102 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:46.102 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:46.102 
18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:46.102 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.102 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.102 [2024-11-26 18:01:27.706290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:46.102 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.102 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:46.102 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:46.102 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.102 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:46.102 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.102 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.102 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.102 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:46.102 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:46.102 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:46.102 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.102 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.102 [2024-11-26 18:01:27.869795] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:46.102 [2024-11-26 18:01:27.869983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:46.360 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.360 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:46.360 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:46.360 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.360 18:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:46.360 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.360 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.360 18:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:46.360 BaseBdev2 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.360 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.360 [ 00:16:46.360 { 00:16:46.360 "name": "BaseBdev2", 00:16:46.360 "aliases": [ 00:16:46.360 "4422021f-2aff-4cd7-9c80-a8fa55ca0924" 00:16:46.360 ], 00:16:46.360 "product_name": "Malloc disk", 00:16:46.360 "block_size": 512, 00:16:46.360 "num_blocks": 65536, 00:16:46.360 "uuid": 
"4422021f-2aff-4cd7-9c80-a8fa55ca0924", 00:16:46.360 "assigned_rate_limits": { 00:16:46.360 "rw_ios_per_sec": 0, 00:16:46.360 "rw_mbytes_per_sec": 0, 00:16:46.360 "r_mbytes_per_sec": 0, 00:16:46.360 "w_mbytes_per_sec": 0 00:16:46.360 }, 00:16:46.360 "claimed": false, 00:16:46.360 "zoned": false, 00:16:46.360 "supported_io_types": { 00:16:46.360 "read": true, 00:16:46.360 "write": true, 00:16:46.360 "unmap": true, 00:16:46.360 "flush": true, 00:16:46.361 "reset": true, 00:16:46.361 "nvme_admin": false, 00:16:46.361 "nvme_io": false, 00:16:46.361 "nvme_io_md": false, 00:16:46.361 "write_zeroes": true, 00:16:46.361 "zcopy": true, 00:16:46.361 "get_zone_info": false, 00:16:46.361 "zone_management": false, 00:16:46.361 "zone_append": false, 00:16:46.361 "compare": false, 00:16:46.361 "compare_and_write": false, 00:16:46.361 "abort": true, 00:16:46.361 "seek_hole": false, 00:16:46.361 "seek_data": false, 00:16:46.361 "copy": true, 00:16:46.361 "nvme_iov_md": false 00:16:46.361 }, 00:16:46.361 "memory_domains": [ 00:16:46.361 { 00:16:46.361 "dma_device_id": "system", 00:16:46.361 "dma_device_type": 1 00:16:46.361 }, 00:16:46.361 { 00:16:46.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.361 "dma_device_type": 2 00:16:46.361 } 00:16:46.361 ], 00:16:46.361 "driver_specific": {} 00:16:46.361 } 00:16:46.361 ] 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.361 BaseBdev3 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.361 [ 00:16:46.361 { 00:16:46.361 "name": "BaseBdev3", 00:16:46.361 "aliases": [ 00:16:46.361 "cbcf7998-02c2-4e78-a522-28d941316bb2" 00:16:46.361 ], 00:16:46.361 
"product_name": "Malloc disk", 00:16:46.361 "block_size": 512, 00:16:46.361 "num_blocks": 65536, 00:16:46.361 "uuid": "cbcf7998-02c2-4e78-a522-28d941316bb2", 00:16:46.361 "assigned_rate_limits": { 00:16:46.361 "rw_ios_per_sec": 0, 00:16:46.361 "rw_mbytes_per_sec": 0, 00:16:46.361 "r_mbytes_per_sec": 0, 00:16:46.361 "w_mbytes_per_sec": 0 00:16:46.361 }, 00:16:46.361 "claimed": false, 00:16:46.361 "zoned": false, 00:16:46.361 "supported_io_types": { 00:16:46.361 "read": true, 00:16:46.361 "write": true, 00:16:46.361 "unmap": true, 00:16:46.361 "flush": true, 00:16:46.361 "reset": true, 00:16:46.361 "nvme_admin": false, 00:16:46.361 "nvme_io": false, 00:16:46.361 "nvme_io_md": false, 00:16:46.361 "write_zeroes": true, 00:16:46.361 "zcopy": true, 00:16:46.361 "get_zone_info": false, 00:16:46.361 "zone_management": false, 00:16:46.361 "zone_append": false, 00:16:46.361 "compare": false, 00:16:46.361 "compare_and_write": false, 00:16:46.361 "abort": true, 00:16:46.361 "seek_hole": false, 00:16:46.361 "seek_data": false, 00:16:46.361 "copy": true, 00:16:46.361 "nvme_iov_md": false 00:16:46.361 }, 00:16:46.361 "memory_domains": [ 00:16:46.361 { 00:16:46.361 "dma_device_id": "system", 00:16:46.361 "dma_device_type": 1 00:16:46.361 }, 00:16:46.361 { 00:16:46.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.361 "dma_device_type": 2 00:16:46.361 } 00:16:46.361 ], 00:16:46.361 "driver_specific": {} 00:16:46.361 } 00:16:46.361 ] 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.361 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.621 BaseBdev4 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.621 [ 00:16:46.621 { 00:16:46.621 "name": "BaseBdev4", 00:16:46.621 
"aliases": [ 00:16:46.621 "3987ee87-554f-4b5d-aa34-ae4728bc9b30" 00:16:46.621 ], 00:16:46.621 "product_name": "Malloc disk", 00:16:46.621 "block_size": 512, 00:16:46.621 "num_blocks": 65536, 00:16:46.621 "uuid": "3987ee87-554f-4b5d-aa34-ae4728bc9b30", 00:16:46.621 "assigned_rate_limits": { 00:16:46.621 "rw_ios_per_sec": 0, 00:16:46.621 "rw_mbytes_per_sec": 0, 00:16:46.621 "r_mbytes_per_sec": 0, 00:16:46.621 "w_mbytes_per_sec": 0 00:16:46.621 }, 00:16:46.621 "claimed": false, 00:16:46.621 "zoned": false, 00:16:46.621 "supported_io_types": { 00:16:46.621 "read": true, 00:16:46.621 "write": true, 00:16:46.621 "unmap": true, 00:16:46.621 "flush": true, 00:16:46.621 "reset": true, 00:16:46.621 "nvme_admin": false, 00:16:46.621 "nvme_io": false, 00:16:46.621 "nvme_io_md": false, 00:16:46.621 "write_zeroes": true, 00:16:46.621 "zcopy": true, 00:16:46.621 "get_zone_info": false, 00:16:46.621 "zone_management": false, 00:16:46.621 "zone_append": false, 00:16:46.621 "compare": false, 00:16:46.621 "compare_and_write": false, 00:16:46.621 "abort": true, 00:16:46.621 "seek_hole": false, 00:16:46.621 "seek_data": false, 00:16:46.621 "copy": true, 00:16:46.621 "nvme_iov_md": false 00:16:46.621 }, 00:16:46.621 "memory_domains": [ 00:16:46.621 { 00:16:46.621 "dma_device_id": "system", 00:16:46.621 "dma_device_type": 1 00:16:46.621 }, 00:16:46.621 { 00:16:46.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.621 "dma_device_type": 2 00:16:46.621 } 00:16:46.621 ], 00:16:46.621 "driver_specific": {} 00:16:46.621 } 00:16:46.621 ] 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:46.621 
18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.621 [2024-11-26 18:01:28.298419] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.621 [2024-11-26 18:01:28.298577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.621 [2024-11-26 18:01:28.298645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.621 [2024-11-26 18:01:28.300959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:46.621 [2024-11-26 18:01:28.301096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.621 "name": "Existed_Raid", 00:16:46.621 "uuid": "7b4f502c-ba43-4efb-9235-2ce054cefb93", 00:16:46.621 "strip_size_kb": 64, 00:16:46.621 "state": "configuring", 00:16:46.621 "raid_level": "raid5f", 00:16:46.621 "superblock": true, 00:16:46.621 "num_base_bdevs": 4, 00:16:46.621 "num_base_bdevs_discovered": 3, 00:16:46.621 "num_base_bdevs_operational": 4, 00:16:46.621 "base_bdevs_list": [ 00:16:46.621 { 00:16:46.621 "name": "BaseBdev1", 00:16:46.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.621 "is_configured": false, 00:16:46.621 "data_offset": 0, 00:16:46.621 "data_size": 0 00:16:46.621 }, 00:16:46.621 { 00:16:46.621 "name": "BaseBdev2", 00:16:46.621 "uuid": "4422021f-2aff-4cd7-9c80-a8fa55ca0924", 00:16:46.621 "is_configured": true, 00:16:46.621 "data_offset": 2048, 00:16:46.621 "data_size": 63488 00:16:46.621 }, 00:16:46.621 { 00:16:46.621 "name": "BaseBdev3", 
00:16:46.621 "uuid": "cbcf7998-02c2-4e78-a522-28d941316bb2", 00:16:46.621 "is_configured": true, 00:16:46.621 "data_offset": 2048, 00:16:46.621 "data_size": 63488 00:16:46.621 }, 00:16:46.621 { 00:16:46.621 "name": "BaseBdev4", 00:16:46.621 "uuid": "3987ee87-554f-4b5d-aa34-ae4728bc9b30", 00:16:46.621 "is_configured": true, 00:16:46.621 "data_offset": 2048, 00:16:46.621 "data_size": 63488 00:16:46.621 } 00:16:46.621 ] 00:16:46.621 }' 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.621 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.189 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:47.189 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.189 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.189 [2024-11-26 18:01:28.789615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:47.189 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.189 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:47.189 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.189 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.189 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.189 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.189 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.189 
18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.189 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.189 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.189 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.189 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.189 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.189 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.189 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.190 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.190 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.190 "name": "Existed_Raid", 00:16:47.190 "uuid": "7b4f502c-ba43-4efb-9235-2ce054cefb93", 00:16:47.190 "strip_size_kb": 64, 00:16:47.190 "state": "configuring", 00:16:47.190 "raid_level": "raid5f", 00:16:47.190 "superblock": true, 00:16:47.190 "num_base_bdevs": 4, 00:16:47.190 "num_base_bdevs_discovered": 2, 00:16:47.190 "num_base_bdevs_operational": 4, 00:16:47.190 "base_bdevs_list": [ 00:16:47.190 { 00:16:47.190 "name": "BaseBdev1", 00:16:47.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.190 "is_configured": false, 00:16:47.190 "data_offset": 0, 00:16:47.190 "data_size": 0 00:16:47.190 }, 00:16:47.190 { 00:16:47.190 "name": null, 00:16:47.190 "uuid": "4422021f-2aff-4cd7-9c80-a8fa55ca0924", 00:16:47.190 "is_configured": false, 00:16:47.190 "data_offset": 0, 00:16:47.190 "data_size": 63488 00:16:47.190 }, 00:16:47.190 { 
00:16:47.190 "name": "BaseBdev3", 00:16:47.190 "uuid": "cbcf7998-02c2-4e78-a522-28d941316bb2", 00:16:47.190 "is_configured": true, 00:16:47.190 "data_offset": 2048, 00:16:47.190 "data_size": 63488 00:16:47.190 }, 00:16:47.190 { 00:16:47.190 "name": "BaseBdev4", 00:16:47.190 "uuid": "3987ee87-554f-4b5d-aa34-ae4728bc9b30", 00:16:47.190 "is_configured": true, 00:16:47.190 "data_offset": 2048, 00:16:47.190 "data_size": 63488 00:16:47.190 } 00:16:47.190 ] 00:16:47.190 }' 00:16:47.190 18:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.190 18:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.448 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.448 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.448 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.448 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:47.448 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.448 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:47.448 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:47.448 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.448 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.708 [2024-11-26 18:01:29.317425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.708 BaseBdev1 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.708 [ 00:16:47.708 { 00:16:47.708 "name": "BaseBdev1", 00:16:47.708 "aliases": [ 00:16:47.708 "07fb60c6-5291-4c2c-b9ce-057449fdadb3" 00:16:47.708 ], 00:16:47.708 "product_name": "Malloc disk", 00:16:47.708 "block_size": 512, 00:16:47.708 "num_blocks": 65536, 00:16:47.708 "uuid": "07fb60c6-5291-4c2c-b9ce-057449fdadb3", 00:16:47.708 "assigned_rate_limits": { 00:16:47.708 "rw_ios_per_sec": 0, 00:16:47.708 "rw_mbytes_per_sec": 0, 00:16:47.708 
"r_mbytes_per_sec": 0, 00:16:47.708 "w_mbytes_per_sec": 0 00:16:47.708 }, 00:16:47.708 "claimed": true, 00:16:47.708 "claim_type": "exclusive_write", 00:16:47.708 "zoned": false, 00:16:47.708 "supported_io_types": { 00:16:47.708 "read": true, 00:16:47.708 "write": true, 00:16:47.708 "unmap": true, 00:16:47.708 "flush": true, 00:16:47.708 "reset": true, 00:16:47.708 "nvme_admin": false, 00:16:47.708 "nvme_io": false, 00:16:47.708 "nvme_io_md": false, 00:16:47.708 "write_zeroes": true, 00:16:47.708 "zcopy": true, 00:16:47.708 "get_zone_info": false, 00:16:47.708 "zone_management": false, 00:16:47.708 "zone_append": false, 00:16:47.708 "compare": false, 00:16:47.708 "compare_and_write": false, 00:16:47.708 "abort": true, 00:16:47.708 "seek_hole": false, 00:16:47.708 "seek_data": false, 00:16:47.708 "copy": true, 00:16:47.708 "nvme_iov_md": false 00:16:47.708 }, 00:16:47.708 "memory_domains": [ 00:16:47.708 { 00:16:47.708 "dma_device_id": "system", 00:16:47.708 "dma_device_type": 1 00:16:47.708 }, 00:16:47.708 { 00:16:47.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.708 "dma_device_type": 2 00:16:47.708 } 00:16:47.708 ], 00:16:47.708 "driver_specific": {} 00:16:47.708 } 00:16:47.708 ] 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.708 18:01:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.708 "name": "Existed_Raid", 00:16:47.708 "uuid": "7b4f502c-ba43-4efb-9235-2ce054cefb93", 00:16:47.708 "strip_size_kb": 64, 00:16:47.708 "state": "configuring", 00:16:47.708 "raid_level": "raid5f", 00:16:47.708 "superblock": true, 00:16:47.708 "num_base_bdevs": 4, 00:16:47.708 "num_base_bdevs_discovered": 3, 00:16:47.708 "num_base_bdevs_operational": 4, 00:16:47.708 "base_bdevs_list": [ 00:16:47.708 { 00:16:47.708 "name": "BaseBdev1", 00:16:47.708 "uuid": "07fb60c6-5291-4c2c-b9ce-057449fdadb3", 00:16:47.708 "is_configured": true, 00:16:47.708 "data_offset": 2048, 00:16:47.708 "data_size": 63488 00:16:47.708 
}, 00:16:47.708 { 00:16:47.708 "name": null, 00:16:47.708 "uuid": "4422021f-2aff-4cd7-9c80-a8fa55ca0924", 00:16:47.708 "is_configured": false, 00:16:47.708 "data_offset": 0, 00:16:47.708 "data_size": 63488 00:16:47.708 }, 00:16:47.708 { 00:16:47.708 "name": "BaseBdev3", 00:16:47.708 "uuid": "cbcf7998-02c2-4e78-a522-28d941316bb2", 00:16:47.708 "is_configured": true, 00:16:47.708 "data_offset": 2048, 00:16:47.708 "data_size": 63488 00:16:47.708 }, 00:16:47.708 { 00:16:47.708 "name": "BaseBdev4", 00:16:47.708 "uuid": "3987ee87-554f-4b5d-aa34-ae4728bc9b30", 00:16:47.708 "is_configured": true, 00:16:47.708 "data_offset": 2048, 00:16:47.708 "data_size": 63488 00:16:47.708 } 00:16:47.708 ] 00:16:47.708 }' 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.708 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.966 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.966 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.966 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:47.966 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.966 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.222 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:48.222 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:48.222 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.222 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.222 
[2024-11-26 18:01:29.860655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:48.222 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.222 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:48.222 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.223 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.223 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.223 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.223 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.223 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.223 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.223 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.223 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.223 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.223 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.223 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.223 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.223 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:48.223 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.223 "name": "Existed_Raid", 00:16:48.223 "uuid": "7b4f502c-ba43-4efb-9235-2ce054cefb93", 00:16:48.223 "strip_size_kb": 64, 00:16:48.223 "state": "configuring", 00:16:48.223 "raid_level": "raid5f", 00:16:48.223 "superblock": true, 00:16:48.223 "num_base_bdevs": 4, 00:16:48.223 "num_base_bdevs_discovered": 2, 00:16:48.223 "num_base_bdevs_operational": 4, 00:16:48.223 "base_bdevs_list": [ 00:16:48.223 { 00:16:48.223 "name": "BaseBdev1", 00:16:48.223 "uuid": "07fb60c6-5291-4c2c-b9ce-057449fdadb3", 00:16:48.223 "is_configured": true, 00:16:48.223 "data_offset": 2048, 00:16:48.223 "data_size": 63488 00:16:48.223 }, 00:16:48.223 { 00:16:48.223 "name": null, 00:16:48.223 "uuid": "4422021f-2aff-4cd7-9c80-a8fa55ca0924", 00:16:48.223 "is_configured": false, 00:16:48.223 "data_offset": 0, 00:16:48.223 "data_size": 63488 00:16:48.223 }, 00:16:48.223 { 00:16:48.223 "name": null, 00:16:48.223 "uuid": "cbcf7998-02c2-4e78-a522-28d941316bb2", 00:16:48.223 "is_configured": false, 00:16:48.223 "data_offset": 0, 00:16:48.223 "data_size": 63488 00:16:48.223 }, 00:16:48.223 { 00:16:48.223 "name": "BaseBdev4", 00:16:48.223 "uuid": "3987ee87-554f-4b5d-aa34-ae4728bc9b30", 00:16:48.223 "is_configured": true, 00:16:48.223 "data_offset": 2048, 00:16:48.223 "data_size": 63488 00:16:48.223 } 00:16:48.223 ] 00:16:48.223 }' 00:16:48.223 18:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.223 18:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.497 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.497 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:48.497 18:01:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.497 18:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.497 18:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.498 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:48.498 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:48.498 18:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.498 18:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.498 [2024-11-26 18:01:30.347840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:48.498 18:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.498 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:48.498 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.498 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.498 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.498 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.498 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.498 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.498 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.498 18:01:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.498 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.759 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.759 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.759 18:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.759 18:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.759 18:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.759 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.759 "name": "Existed_Raid", 00:16:48.759 "uuid": "7b4f502c-ba43-4efb-9235-2ce054cefb93", 00:16:48.759 "strip_size_kb": 64, 00:16:48.759 "state": "configuring", 00:16:48.759 "raid_level": "raid5f", 00:16:48.759 "superblock": true, 00:16:48.759 "num_base_bdevs": 4, 00:16:48.759 "num_base_bdevs_discovered": 3, 00:16:48.759 "num_base_bdevs_operational": 4, 00:16:48.759 "base_bdevs_list": [ 00:16:48.759 { 00:16:48.759 "name": "BaseBdev1", 00:16:48.759 "uuid": "07fb60c6-5291-4c2c-b9ce-057449fdadb3", 00:16:48.759 "is_configured": true, 00:16:48.759 "data_offset": 2048, 00:16:48.759 "data_size": 63488 00:16:48.759 }, 00:16:48.759 { 00:16:48.759 "name": null, 00:16:48.759 "uuid": "4422021f-2aff-4cd7-9c80-a8fa55ca0924", 00:16:48.759 "is_configured": false, 00:16:48.759 "data_offset": 0, 00:16:48.759 "data_size": 63488 00:16:48.759 }, 00:16:48.759 { 00:16:48.759 "name": "BaseBdev3", 00:16:48.759 "uuid": "cbcf7998-02c2-4e78-a522-28d941316bb2", 00:16:48.759 "is_configured": true, 00:16:48.759 "data_offset": 2048, 00:16:48.759 "data_size": 63488 00:16:48.759 }, 00:16:48.759 { 
00:16:48.759 "name": "BaseBdev4", 00:16:48.759 "uuid": "3987ee87-554f-4b5d-aa34-ae4728bc9b30", 00:16:48.759 "is_configured": true, 00:16:48.759 "data_offset": 2048, 00:16:48.759 "data_size": 63488 00:16:48.759 } 00:16:48.759 ] 00:16:48.759 }' 00:16:48.759 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.759 18:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.017 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:49.017 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.017 18:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.017 18:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.017 18:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.017 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:49.017 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:49.017 18:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.017 18:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.017 [2024-11-26 18:01:30.859044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:49.276 18:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.276 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:49.276 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:49.276 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.276 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.276 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.276 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.276 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.276 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.276 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.276 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.276 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.276 18:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.276 18:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.276 18:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.276 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.276 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.276 "name": "Existed_Raid", 00:16:49.276 "uuid": "7b4f502c-ba43-4efb-9235-2ce054cefb93", 00:16:49.276 "strip_size_kb": 64, 00:16:49.276 "state": "configuring", 00:16:49.276 "raid_level": "raid5f", 00:16:49.276 "superblock": true, 00:16:49.276 "num_base_bdevs": 4, 00:16:49.276 "num_base_bdevs_discovered": 2, 00:16:49.276 
"num_base_bdevs_operational": 4, 00:16:49.276 "base_bdevs_list": [ 00:16:49.276 { 00:16:49.276 "name": null, 00:16:49.276 "uuid": "07fb60c6-5291-4c2c-b9ce-057449fdadb3", 00:16:49.276 "is_configured": false, 00:16:49.276 "data_offset": 0, 00:16:49.276 "data_size": 63488 00:16:49.276 }, 00:16:49.276 { 00:16:49.276 "name": null, 00:16:49.276 "uuid": "4422021f-2aff-4cd7-9c80-a8fa55ca0924", 00:16:49.276 "is_configured": false, 00:16:49.276 "data_offset": 0, 00:16:49.276 "data_size": 63488 00:16:49.276 }, 00:16:49.276 { 00:16:49.276 "name": "BaseBdev3", 00:16:49.276 "uuid": "cbcf7998-02c2-4e78-a522-28d941316bb2", 00:16:49.276 "is_configured": true, 00:16:49.276 "data_offset": 2048, 00:16:49.276 "data_size": 63488 00:16:49.276 }, 00:16:49.276 { 00:16:49.276 "name": "BaseBdev4", 00:16:49.276 "uuid": "3987ee87-554f-4b5d-aa34-ae4728bc9b30", 00:16:49.276 "is_configured": true, 00:16:49.276 "data_offset": 2048, 00:16:49.276 "data_size": 63488 00:16:49.276 } 00:16:49.276 ] 00:16:49.276 }' 00:16:49.276 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.276 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.844 [2024-11-26 18:01:31.466553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.844 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.845 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.845 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.845 "name": "Existed_Raid", 00:16:49.845 "uuid": "7b4f502c-ba43-4efb-9235-2ce054cefb93", 00:16:49.845 "strip_size_kb": 64, 00:16:49.845 "state": "configuring", 00:16:49.845 "raid_level": "raid5f", 00:16:49.845 "superblock": true, 00:16:49.845 "num_base_bdevs": 4, 00:16:49.845 "num_base_bdevs_discovered": 3, 00:16:49.845 "num_base_bdevs_operational": 4, 00:16:49.845 "base_bdevs_list": [ 00:16:49.845 { 00:16:49.845 "name": null, 00:16:49.845 "uuid": "07fb60c6-5291-4c2c-b9ce-057449fdadb3", 00:16:49.845 "is_configured": false, 00:16:49.845 "data_offset": 0, 00:16:49.845 "data_size": 63488 00:16:49.845 }, 00:16:49.845 { 00:16:49.845 "name": "BaseBdev2", 00:16:49.845 "uuid": "4422021f-2aff-4cd7-9c80-a8fa55ca0924", 00:16:49.845 "is_configured": true, 00:16:49.845 "data_offset": 2048, 00:16:49.845 "data_size": 63488 00:16:49.845 }, 00:16:49.845 { 00:16:49.845 "name": "BaseBdev3", 00:16:49.845 "uuid": "cbcf7998-02c2-4e78-a522-28d941316bb2", 00:16:49.845 "is_configured": true, 00:16:49.845 "data_offset": 2048, 00:16:49.845 "data_size": 63488 00:16:49.845 }, 00:16:49.845 { 00:16:49.845 "name": "BaseBdev4", 00:16:49.845 "uuid": "3987ee87-554f-4b5d-aa34-ae4728bc9b30", 00:16:49.845 "is_configured": true, 00:16:49.845 "data_offset": 2048, 00:16:49.845 "data_size": 63488 00:16:49.845 } 00:16:49.845 ] 00:16:49.845 }' 00:16:49.845 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.845 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:50.103 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.103 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:50.103 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.103 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.103 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.103 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:50.103 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.103 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.103 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.103 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:50.103 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.362 18:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 07fb60c6-5291-4c2c-b9ce-057449fdadb3 00:16:50.362 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.362 18:01:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.362 [2024-11-26 18:01:32.019649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:50.362 [2024-11-26 18:01:32.020071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:50.362 [2024-11-26 
18:01:32.020137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:50.362 [2024-11-26 18:01:32.020490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:50.362 NewBaseBdev 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.362 [2024-11-26 18:01:32.028306] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:50.362 [2024-11-26 18:01:32.028389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:50.362 [2024-11-26 18:01:32.028738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.362 [ 00:16:50.362 { 00:16:50.362 "name": "NewBaseBdev", 00:16:50.362 "aliases": [ 00:16:50.362 "07fb60c6-5291-4c2c-b9ce-057449fdadb3" 00:16:50.362 ], 00:16:50.362 "product_name": "Malloc disk", 00:16:50.362 "block_size": 512, 00:16:50.362 "num_blocks": 65536, 00:16:50.362 "uuid": "07fb60c6-5291-4c2c-b9ce-057449fdadb3", 00:16:50.362 "assigned_rate_limits": { 00:16:50.362 "rw_ios_per_sec": 0, 00:16:50.362 "rw_mbytes_per_sec": 0, 00:16:50.362 "r_mbytes_per_sec": 0, 00:16:50.362 "w_mbytes_per_sec": 0 00:16:50.362 }, 00:16:50.362 "claimed": true, 00:16:50.362 "claim_type": "exclusive_write", 00:16:50.362 "zoned": false, 00:16:50.362 "supported_io_types": { 00:16:50.362 "read": true, 00:16:50.362 "write": true, 00:16:50.362 "unmap": true, 00:16:50.362 "flush": true, 00:16:50.362 "reset": true, 00:16:50.362 "nvme_admin": false, 00:16:50.362 "nvme_io": false, 00:16:50.362 "nvme_io_md": false, 00:16:50.362 "write_zeroes": true, 00:16:50.362 "zcopy": true, 00:16:50.362 "get_zone_info": false, 00:16:50.362 "zone_management": false, 00:16:50.362 "zone_append": false, 00:16:50.362 "compare": false, 00:16:50.362 "compare_and_write": false, 00:16:50.362 "abort": true, 00:16:50.362 "seek_hole": false, 00:16:50.362 "seek_data": false, 00:16:50.362 "copy": true, 00:16:50.362 "nvme_iov_md": false 00:16:50.362 }, 00:16:50.362 "memory_domains": [ 00:16:50.362 { 00:16:50.362 "dma_device_id": "system", 00:16:50.362 "dma_device_type": 1 00:16:50.362 }, 00:16:50.362 { 00:16:50.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.362 "dma_device_type": 2 00:16:50.362 } 00:16:50.362 ], 00:16:50.362 "driver_specific": {} 00:16:50.362 } 00:16:50.362 ] 00:16:50.362 18:01:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:50.362 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.362 "name": "Existed_Raid", 00:16:50.362 "uuid": "7b4f502c-ba43-4efb-9235-2ce054cefb93", 00:16:50.362 "strip_size_kb": 64, 00:16:50.362 "state": "online", 00:16:50.362 "raid_level": "raid5f", 00:16:50.362 "superblock": true, 00:16:50.362 "num_base_bdevs": 4, 00:16:50.362 "num_base_bdevs_discovered": 4, 00:16:50.362 "num_base_bdevs_operational": 4, 00:16:50.362 "base_bdevs_list": [ 00:16:50.362 { 00:16:50.362 "name": "NewBaseBdev", 00:16:50.362 "uuid": "07fb60c6-5291-4c2c-b9ce-057449fdadb3", 00:16:50.362 "is_configured": true, 00:16:50.362 "data_offset": 2048, 00:16:50.363 "data_size": 63488 00:16:50.363 }, 00:16:50.363 { 00:16:50.363 "name": "BaseBdev2", 00:16:50.363 "uuid": "4422021f-2aff-4cd7-9c80-a8fa55ca0924", 00:16:50.363 "is_configured": true, 00:16:50.363 "data_offset": 2048, 00:16:50.363 "data_size": 63488 00:16:50.363 }, 00:16:50.363 { 00:16:50.363 "name": "BaseBdev3", 00:16:50.363 "uuid": "cbcf7998-02c2-4e78-a522-28d941316bb2", 00:16:50.363 "is_configured": true, 00:16:50.363 "data_offset": 2048, 00:16:50.363 "data_size": 63488 00:16:50.363 }, 00:16:50.363 { 00:16:50.363 "name": "BaseBdev4", 00:16:50.363 "uuid": "3987ee87-554f-4b5d-aa34-ae4728bc9b30", 00:16:50.363 "is_configured": true, 00:16:50.363 "data_offset": 2048, 00:16:50.363 "data_size": 63488 00:16:50.363 } 00:16:50.363 ] 00:16:50.363 }' 00:16:50.363 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.363 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.930 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:50.930 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:50.930 18:01:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:50.930 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:50.930 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:50.930 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:50.930 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:50.930 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.930 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.930 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:50.930 [2024-11-26 18:01:32.546585] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.930 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.930 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:50.930 "name": "Existed_Raid", 00:16:50.930 "aliases": [ 00:16:50.930 "7b4f502c-ba43-4efb-9235-2ce054cefb93" 00:16:50.930 ], 00:16:50.930 "product_name": "Raid Volume", 00:16:50.930 "block_size": 512, 00:16:50.930 "num_blocks": 190464, 00:16:50.930 "uuid": "7b4f502c-ba43-4efb-9235-2ce054cefb93", 00:16:50.930 "assigned_rate_limits": { 00:16:50.930 "rw_ios_per_sec": 0, 00:16:50.930 "rw_mbytes_per_sec": 0, 00:16:50.930 "r_mbytes_per_sec": 0, 00:16:50.930 "w_mbytes_per_sec": 0 00:16:50.930 }, 00:16:50.931 "claimed": false, 00:16:50.931 "zoned": false, 00:16:50.931 "supported_io_types": { 00:16:50.931 "read": true, 00:16:50.931 "write": true, 00:16:50.931 "unmap": false, 00:16:50.931 "flush": false, 00:16:50.931 "reset": true, 00:16:50.931 "nvme_admin": false, 00:16:50.931 "nvme_io": false, 
00:16:50.931 "nvme_io_md": false, 00:16:50.931 "write_zeroes": true, 00:16:50.931 "zcopy": false, 00:16:50.931 "get_zone_info": false, 00:16:50.931 "zone_management": false, 00:16:50.931 "zone_append": false, 00:16:50.931 "compare": false, 00:16:50.931 "compare_and_write": false, 00:16:50.931 "abort": false, 00:16:50.931 "seek_hole": false, 00:16:50.931 "seek_data": false, 00:16:50.931 "copy": false, 00:16:50.931 "nvme_iov_md": false 00:16:50.931 }, 00:16:50.931 "driver_specific": { 00:16:50.931 "raid": { 00:16:50.931 "uuid": "7b4f502c-ba43-4efb-9235-2ce054cefb93", 00:16:50.931 "strip_size_kb": 64, 00:16:50.931 "state": "online", 00:16:50.931 "raid_level": "raid5f", 00:16:50.931 "superblock": true, 00:16:50.931 "num_base_bdevs": 4, 00:16:50.931 "num_base_bdevs_discovered": 4, 00:16:50.931 "num_base_bdevs_operational": 4, 00:16:50.931 "base_bdevs_list": [ 00:16:50.931 { 00:16:50.931 "name": "NewBaseBdev", 00:16:50.931 "uuid": "07fb60c6-5291-4c2c-b9ce-057449fdadb3", 00:16:50.931 "is_configured": true, 00:16:50.931 "data_offset": 2048, 00:16:50.931 "data_size": 63488 00:16:50.931 }, 00:16:50.931 { 00:16:50.931 "name": "BaseBdev2", 00:16:50.931 "uuid": "4422021f-2aff-4cd7-9c80-a8fa55ca0924", 00:16:50.931 "is_configured": true, 00:16:50.931 "data_offset": 2048, 00:16:50.931 "data_size": 63488 00:16:50.931 }, 00:16:50.931 { 00:16:50.931 "name": "BaseBdev3", 00:16:50.931 "uuid": "cbcf7998-02c2-4e78-a522-28d941316bb2", 00:16:50.931 "is_configured": true, 00:16:50.931 "data_offset": 2048, 00:16:50.931 "data_size": 63488 00:16:50.931 }, 00:16:50.931 { 00:16:50.931 "name": "BaseBdev4", 00:16:50.931 "uuid": "3987ee87-554f-4b5d-aa34-ae4728bc9b30", 00:16:50.931 "is_configured": true, 00:16:50.931 "data_offset": 2048, 00:16:50.931 "data_size": 63488 00:16:50.931 } 00:16:50.931 ] 00:16:50.931 } 00:16:50.931 } 00:16:50.931 }' 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:50.931 BaseBdev2 00:16:50.931 BaseBdev3 00:16:50.931 BaseBdev4' 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.931 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.191 [2024-11-26 18:01:32.861779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:51.191 [2024-11-26 18:01:32.861924] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.191 [2024-11-26 18:01:32.862077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.191 [2024-11-26 18:01:32.862464] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.191 [2024-11-26 18:01:32.862485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83856 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83856 ']' 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83856 00:16:51.191 18:01:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83856 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83856' 00:16:51.191 killing process with pid 83856 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83856 00:16:51.191 [2024-11-26 18:01:32.909499] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:51.191 18:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83856 00:16:51.761 [2024-11-26 18:01:33.401126] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:53.139 18:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:53.139 00:16:53.139 real 0m12.091s 00:16:53.139 user 0m18.667s 00:16:53.139 sys 0m2.299s 00:16:53.139 ************************************ 00:16:53.139 END TEST raid5f_state_function_test_sb 00:16:53.139 ************************************ 00:16:53.139 18:01:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:53.139 18:01:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.139 18:01:34 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:53.139 18:01:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:53.139 
18:01:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:53.139 18:01:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:53.139 ************************************ 00:16:53.139 START TEST raid5f_superblock_test 00:16:53.139 ************************************ 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84531 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84531 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84531 ']' 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:53.139 18:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.139 [2024-11-26 18:01:34.951751] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:16:53.139 [2024-11-26 18:01:34.952014] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84531 ] 00:16:53.399 [2024-11-26 18:01:35.135180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.661 [2024-11-26 18:01:35.280647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.923 [2024-11-26 18:01:35.533720] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.923 [2024-11-26 18:01:35.533935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.183 malloc1 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.183 [2024-11-26 18:01:35.863077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:54.183 [2024-11-26 18:01:35.863259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.183 [2024-11-26 18:01:35.863315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:54.183 [2024-11-26 18:01:35.863372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.183 [2024-11-26 18:01:35.866165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.183 [2024-11-26 18:01:35.866262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:54.183 pt1 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.183 malloc2 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.183 [2024-11-26 18:01:35.934543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:54.183 [2024-11-26 18:01:35.934723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.183 [2024-11-26 18:01:35.934763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:54.183 [2024-11-26 18:01:35.934775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.183 [2024-11-26 18:01:35.937417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.183 [2024-11-26 18:01:35.937473] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:54.183 pt2 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.183 18:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.183 malloc3 00:16:54.183 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.183 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:54.183 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.183 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.183 [2024-11-26 18:01:36.012436] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:54.183 [2024-11-26 18:01:36.012597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.183 [2024-11-26 18:01:36.012648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:54.183 [2024-11-26 18:01:36.012693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.183 [2024-11-26 18:01:36.015444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.183 [2024-11-26 18:01:36.015544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:54.183 pt3 00:16:54.183 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.183 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:54.183 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:54.183 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:54.183 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:54.183 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:54.183 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:54.183 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:54.183 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:54.183 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:54.183 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.183 18:01:36 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.444 malloc4 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.444 [2024-11-26 18:01:36.080517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:54.444 [2024-11-26 18:01:36.080694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.444 [2024-11-26 18:01:36.080748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:54.444 [2024-11-26 18:01:36.080821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.444 [2024-11-26 18:01:36.083397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.444 [2024-11-26 18:01:36.083492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:54.444 pt4 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:54.444 [2024-11-26 18:01:36.092543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:54.444 [2024-11-26 18:01:36.094902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:54.444 [2024-11-26 18:01:36.095104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:54.444 [2024-11-26 18:01:36.095170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:54.444 [2024-11-26 18:01:36.095402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:54.444 [2024-11-26 18:01:36.095421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:54.444 [2024-11-26 18:01:36.095740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:54.444 [2024-11-26 18:01:36.103629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:54.444 [2024-11-26 18:01:36.103660] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:54.444 [2024-11-26 18:01:36.103938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.444 
18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.444 "name": "raid_bdev1", 00:16:54.444 "uuid": "cf979581-8644-4c73-a954-c355dec91a22", 00:16:54.444 "strip_size_kb": 64, 00:16:54.444 "state": "online", 00:16:54.444 "raid_level": "raid5f", 00:16:54.444 "superblock": true, 00:16:54.444 "num_base_bdevs": 4, 00:16:54.444 "num_base_bdevs_discovered": 4, 00:16:54.444 "num_base_bdevs_operational": 4, 00:16:54.444 "base_bdevs_list": [ 00:16:54.444 { 00:16:54.444 "name": "pt1", 00:16:54.444 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:54.444 "is_configured": true, 00:16:54.444 "data_offset": 2048, 00:16:54.444 "data_size": 63488 00:16:54.444 }, 00:16:54.444 { 00:16:54.444 "name": "pt2", 00:16:54.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.444 "is_configured": true, 00:16:54.444 "data_offset": 2048, 00:16:54.444 
"data_size": 63488 00:16:54.444 }, 00:16:54.444 { 00:16:54.444 "name": "pt3", 00:16:54.444 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:54.444 "is_configured": true, 00:16:54.444 "data_offset": 2048, 00:16:54.444 "data_size": 63488 00:16:54.444 }, 00:16:54.444 { 00:16:54.444 "name": "pt4", 00:16:54.444 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:54.444 "is_configured": true, 00:16:54.444 "data_offset": 2048, 00:16:54.444 "data_size": 63488 00:16:54.444 } 00:16:54.444 ] 00:16:54.444 }' 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.444 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.705 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:54.705 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:54.705 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:54.705 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:54.705 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:54.705 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:54.965 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:54.965 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:54.965 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.965 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.965 [2024-11-26 18:01:36.577534] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.965 18:01:36 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.965 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:54.965 "name": "raid_bdev1", 00:16:54.965 "aliases": [ 00:16:54.965 "cf979581-8644-4c73-a954-c355dec91a22" 00:16:54.965 ], 00:16:54.965 "product_name": "Raid Volume", 00:16:54.965 "block_size": 512, 00:16:54.965 "num_blocks": 190464, 00:16:54.965 "uuid": "cf979581-8644-4c73-a954-c355dec91a22", 00:16:54.965 "assigned_rate_limits": { 00:16:54.965 "rw_ios_per_sec": 0, 00:16:54.965 "rw_mbytes_per_sec": 0, 00:16:54.965 "r_mbytes_per_sec": 0, 00:16:54.965 "w_mbytes_per_sec": 0 00:16:54.965 }, 00:16:54.965 "claimed": false, 00:16:54.965 "zoned": false, 00:16:54.965 "supported_io_types": { 00:16:54.965 "read": true, 00:16:54.965 "write": true, 00:16:54.965 "unmap": false, 00:16:54.965 "flush": false, 00:16:54.965 "reset": true, 00:16:54.965 "nvme_admin": false, 00:16:54.965 "nvme_io": false, 00:16:54.965 "nvme_io_md": false, 00:16:54.965 "write_zeroes": true, 00:16:54.965 "zcopy": false, 00:16:54.965 "get_zone_info": false, 00:16:54.965 "zone_management": false, 00:16:54.965 "zone_append": false, 00:16:54.965 "compare": false, 00:16:54.965 "compare_and_write": false, 00:16:54.965 "abort": false, 00:16:54.965 "seek_hole": false, 00:16:54.965 "seek_data": false, 00:16:54.965 "copy": false, 00:16:54.965 "nvme_iov_md": false 00:16:54.965 }, 00:16:54.965 "driver_specific": { 00:16:54.965 "raid": { 00:16:54.965 "uuid": "cf979581-8644-4c73-a954-c355dec91a22", 00:16:54.965 "strip_size_kb": 64, 00:16:54.965 "state": "online", 00:16:54.965 "raid_level": "raid5f", 00:16:54.965 "superblock": true, 00:16:54.965 "num_base_bdevs": 4, 00:16:54.965 "num_base_bdevs_discovered": 4, 00:16:54.965 "num_base_bdevs_operational": 4, 00:16:54.965 "base_bdevs_list": [ 00:16:54.965 { 00:16:54.965 "name": "pt1", 00:16:54.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:54.965 "is_configured": true, 00:16:54.965 "data_offset": 2048, 
00:16:54.965 "data_size": 63488 00:16:54.965 }, 00:16:54.965 { 00:16:54.965 "name": "pt2", 00:16:54.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.965 "is_configured": true, 00:16:54.965 "data_offset": 2048, 00:16:54.965 "data_size": 63488 00:16:54.965 }, 00:16:54.965 { 00:16:54.965 "name": "pt3", 00:16:54.965 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:54.965 "is_configured": true, 00:16:54.965 "data_offset": 2048, 00:16:54.965 "data_size": 63488 00:16:54.965 }, 00:16:54.965 { 00:16:54.965 "name": "pt4", 00:16:54.965 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:54.965 "is_configured": true, 00:16:54.965 "data_offset": 2048, 00:16:54.965 "data_size": 63488 00:16:54.965 } 00:16:54.965 ] 00:16:54.965 } 00:16:54.965 } 00:16:54.965 }' 00:16:54.965 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:54.965 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:54.965 pt2 00:16:54.965 pt3 00:16:54.966 pt4' 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.966 18:01:36 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.966 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.225 [2024-11-26 18:01:36.909004] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cf979581-8644-4c73-a954-c355dec91a22 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
cf979581-8644-4c73-a954-c355dec91a22 ']' 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.225 [2024-11-26 18:01:36.936740] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:55.225 [2024-11-26 18:01:36.936887] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:55.225 [2024-11-26 18:01:36.937043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.225 [2024-11-26 18:01:36.937170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:55.225 [2024-11-26 18:01:36.937236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:55.225 
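In the teardown traced above, after `bdev_raid_delete raid_bdev1` the script re-queries `bdev_raid_get_bdevs all`, pipes the result through `jq -r '.[]'`, and gets an empty string, so the `'[' -n '' ']'` guard fails and no further raid cleanup runs. A minimal Python sketch of that emptiness check, using a hypothetical empty RPC response (the `[]` literal is an assumption standing in for the live RPC output, which the log shows producing the same empty assignment):

```python
import json

# Hypothetical bdev_raid_get_bdevs output after the delete: no raid bdevs remain.
rpc_output = "[]"

# Mirrors: raid_bdev=$(rpc_cmd bdev_raid_get_bdevs all | jq -r '.[]')
bdevs = json.loads(rpc_output)
raid_bdev = "\n".join(json.dumps(b) for b in bdevs)

# Mirrors the guard: '[' -n "$raid_bdev" ']' -- false here, so teardown moves on
# to deleting the passthru bdevs pt1..pt4.
assert raid_bdev == ""
```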
18:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.225 18:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.225 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.225 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:55.225 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:55.225 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.225 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.225 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.225 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:55.225 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:55.225 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.225 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.225 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.226 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:55.226 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:55.226 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.226 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.226 18:01:37 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.226 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:55.226 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:55.226 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.226 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.484 [2024-11-26 18:01:37.108473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:55.484 [2024-11-26 18:01:37.110985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:55.484 [2024-11-26 18:01:37.111133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:55.484 [2024-11-26 18:01:37.111215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:55.484 [2024-11-26 18:01:37.111306] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:55.484 [2024-11-26 18:01:37.111452] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:55.484 [2024-11-26 18:01:37.111524] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:55.484 request: 00:16:55.484 [2024-11-26 18:01:37.111614] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:55.484 [2024-11-26 18:01:37.111635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:55.484 [2024-11-26 18:01:37.111651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:55.484 { 00:16:55.484 "name": "raid_bdev1", 00:16:55.484 "raid_level": "raid5f", 00:16:55.484 "base_bdevs": [ 00:16:55.484 "malloc1", 00:16:55.484 "malloc2", 00:16:55.484 "malloc3", 00:16:55.484 "malloc4" 00:16:55.484 ], 00:16:55.484 "strip_size_kb": 64, 00:16:55.484 "superblock": false, 00:16:55.484 "method": "bdev_raid_create", 00:16:55.484 "req_id": 1 00:16:55.484 } 00:16:55.484 Got JSON-RPC error response 
00:16:55.484 response: 00:16:55.484 { 00:16:55.484 "code": -17, 00:16:55.484 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:55.484 } 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:55.484 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.485 [2024-11-26 18:01:37.168422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:55.485 [2024-11-26 18:01:37.168621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:55.485 [2024-11-26 18:01:37.168668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:55.485 [2024-11-26 18:01:37.168714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.485 [2024-11-26 18:01:37.171540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.485 [2024-11-26 18:01:37.171652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:55.485 [2024-11-26 18:01:37.171816] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:55.485 [2024-11-26 18:01:37.171923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:55.485 pt1 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.485 "name": "raid_bdev1", 00:16:55.485 "uuid": "cf979581-8644-4c73-a954-c355dec91a22", 00:16:55.485 "strip_size_kb": 64, 00:16:55.485 "state": "configuring", 00:16:55.485 "raid_level": "raid5f", 00:16:55.485 "superblock": true, 00:16:55.485 "num_base_bdevs": 4, 00:16:55.485 "num_base_bdevs_discovered": 1, 00:16:55.485 "num_base_bdevs_operational": 4, 00:16:55.485 "base_bdevs_list": [ 00:16:55.485 { 00:16:55.485 "name": "pt1", 00:16:55.485 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:55.485 "is_configured": true, 00:16:55.485 "data_offset": 2048, 00:16:55.485 "data_size": 63488 00:16:55.485 }, 00:16:55.485 { 00:16:55.485 "name": null, 00:16:55.485 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.485 "is_configured": false, 00:16:55.485 "data_offset": 2048, 00:16:55.485 "data_size": 63488 00:16:55.485 }, 00:16:55.485 { 00:16:55.485 "name": null, 00:16:55.485 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:55.485 "is_configured": false, 00:16:55.485 "data_offset": 2048, 00:16:55.485 "data_size": 63488 00:16:55.485 }, 00:16:55.485 { 00:16:55.485 "name": null, 00:16:55.485 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:55.485 "is_configured": false, 00:16:55.485 "data_offset": 2048, 00:16:55.485 "data_size": 63488 00:16:55.485 } 00:16:55.485 ] 00:16:55.485 }' 
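The `verify_raid_bdev_state` helper traced here filters `bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "raid_bdev1")'` and compares the resulting fields against the expected state. A minimal Python sketch of the same comparison, with the dict transcribed (abbreviated to the checked fields) from the `raid_bdev_info` payload printed in this log rather than queried from a live target:

```python
import json

# Abbreviated raid_bdev_info, transcribed from the trace above.
raid_bdev_info = json.loads("""{
  "name": "raid_bdev1",
  "uuid": "cf979581-8644-4c73-a954-c355dec91a22",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4
}""")

# Mirrors: verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
assert raid_bdev_info["state"] == "configuring"
assert raid_bdev_info["raid_level"] == "raid5f"
assert raid_bdev_info["strip_size_kb"] == 64
assert raid_bdev_info["num_base_bdevs_operational"] == 4
# Only pt1 has been re-created at this point, hence one discovered base bdev.
assert raid_bdev_info["num_base_bdevs_discovered"] == 1
```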
00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.485 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.053 [2024-11-26 18:01:37.619811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:56.053 [2024-11-26 18:01:37.620050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.053 [2024-11-26 18:01:37.620116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:56.053 [2024-11-26 18:01:37.620161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.053 [2024-11-26 18:01:37.620785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.053 [2024-11-26 18:01:37.620874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:56.053 [2024-11-26 18:01:37.621054] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:56.053 [2024-11-26 18:01:37.621130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:56.053 pt2 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.053 [2024-11-26 18:01:37.631784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.053 "name": "raid_bdev1", 00:16:56.053 "uuid": "cf979581-8644-4c73-a954-c355dec91a22", 00:16:56.053 "strip_size_kb": 64, 00:16:56.053 "state": "configuring", 00:16:56.053 "raid_level": "raid5f", 00:16:56.053 "superblock": true, 00:16:56.053 "num_base_bdevs": 4, 00:16:56.053 "num_base_bdevs_discovered": 1, 00:16:56.053 "num_base_bdevs_operational": 4, 00:16:56.053 "base_bdevs_list": [ 00:16:56.053 { 00:16:56.053 "name": "pt1", 00:16:56.053 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:56.053 "is_configured": true, 00:16:56.053 "data_offset": 2048, 00:16:56.053 "data_size": 63488 00:16:56.053 }, 00:16:56.053 { 00:16:56.053 "name": null, 00:16:56.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:56.053 "is_configured": false, 00:16:56.053 "data_offset": 0, 00:16:56.053 "data_size": 63488 00:16:56.053 }, 00:16:56.053 { 00:16:56.053 "name": null, 00:16:56.053 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:56.053 "is_configured": false, 00:16:56.053 "data_offset": 2048, 00:16:56.053 "data_size": 63488 00:16:56.053 }, 00:16:56.053 { 00:16:56.053 "name": null, 00:16:56.053 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:56.053 "is_configured": false, 00:16:56.053 "data_offset": 2048, 00:16:56.053 "data_size": 63488 00:16:56.053 } 00:16:56.053 ] 00:16:56.053 }' 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.053 18:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
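Annotation: the `verify_raid_bdev_state` calls above (bdev_raid.sh@113) fetch every raid bdev over RPC and pick one out with `jq -r '.[] | select(.name == "raid_bdev1")'`, then compare fields like `state` and `num_base_bdevs_discovered` against expectations. A minimal sketch of that selection in Python, against a made-up sample of the `bdev_raid_get_bdevs all` output (the field values here are copied from the JSON dumped in the log, but the sample itself is illustrative, not real RPC output):

```python
import json

# Illustrative sample shaped like `rpc_cmd bdev_raid_get_bdevs all` output.
sample = json.loads("""
[
  {"name": "raid_bdev1", "state": "configuring",
   "num_base_bdevs": 4, "num_base_bdevs_discovered": 1},
  {"name": "some_other_bdev", "state": "online"}
]
""")

# Python equivalent of jq's `.[] | select(.name == "raid_bdev1")`.
info = next(b for b in sample if b["name"] == "raid_bdev1")

# These are the fields verify_raid_bdev_state compares via jq.
print(info["state"])                      # "configuring" in this sample
print(info["num_base_bdevs_discovered"])  # 1 in this sample
```

The test repeats this same fetch-and-select after every mutation (create, delete, re-add) so each state transition is asserted against live RPC data.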
00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.313 [2024-11-26 18:01:38.067089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:56.313 [2024-11-26 18:01:38.067298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.313 [2024-11-26 18:01:38.067350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:56.313 [2024-11-26 18:01:38.067397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.313 [2024-11-26 18:01:38.068004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.313 [2024-11-26 18:01:38.068098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:56.313 [2024-11-26 18:01:38.068256] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:56.313 [2024-11-26 18:01:38.068321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:56.313 pt2 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.313 [2024-11-26 18:01:38.078974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
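Annotation: the loop at bdev_raid.sh@478-479 registers one passthru bdev per remaining base device, calling `rpc_cmd bdev_passthru_create -b mallocN -p ptN -u <uuid>` for pt2 through pt4 (pt1 was registered earlier). A hedged sketch of the argument lists that loop produces; the list construction is illustrative, only the argument shapes are taken from the log:

```python
num_base_bdevs = 4

# Build one RPC call per base bdev still to be wrapped (pt2..pt4),
# mirroring the -b/-p/-u arguments visible in the xtrace output.
rpc_calls = []
for n in range(2, num_base_bdevs + 1):
    rpc_calls.append([
        "bdev_passthru_create",
        "-b", f"malloc{n}",                       # base bdev to wrap
        "-p", f"pt{n}",                           # passthru bdev name
        "-u", f"00000000-0000-0000-0000-{n:012d}" # fixed test UUID pattern
    ])

for call in rpc_calls:
    print(" ".join(call))
```

Once the last passthru (pt4) is claimed, the raid module has all four members and, as the log shows, immediately configures and registers the raid_bdev1 io device.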
00:16:56.313 [2024-11-26 18:01:38.079118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.313 [2024-11-26 18:01:38.079171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:56.313 [2024-11-26 18:01:38.079215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.313 [2024-11-26 18:01:38.079702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.313 [2024-11-26 18:01:38.079770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:56.313 [2024-11-26 18:01:38.079902] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:56.313 [2024-11-26 18:01:38.079976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:56.313 pt3 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.313 [2024-11-26 18:01:38.090928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:56.313 [2024-11-26 18:01:38.091056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.313 [2024-11-26 18:01:38.091099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:56.313 [2024-11-26 18:01:38.091139] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.313 [2024-11-26 18:01:38.091627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.313 [2024-11-26 18:01:38.091698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:56.313 [2024-11-26 18:01:38.091818] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:56.313 [2024-11-26 18:01:38.091884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:56.313 [2024-11-26 18:01:38.092123] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:56.313 [2024-11-26 18:01:38.092176] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:56.313 [2024-11-26 18:01:38.092521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:56.313 [2024-11-26 18:01:38.101303] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:56.313 [2024-11-26 18:01:38.101381] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:56.313 [2024-11-26 18:01:38.101673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.313 pt4 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:56.313 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.314 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.314 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.314 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.314 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.314 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.314 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.314 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.314 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.314 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.314 "name": "raid_bdev1", 00:16:56.314 "uuid": "cf979581-8644-4c73-a954-c355dec91a22", 00:16:56.314 "strip_size_kb": 64, 00:16:56.314 "state": "online", 00:16:56.314 "raid_level": "raid5f", 00:16:56.314 "superblock": true, 00:16:56.314 "num_base_bdevs": 4, 00:16:56.314 "num_base_bdevs_discovered": 4, 00:16:56.314 "num_base_bdevs_operational": 4, 00:16:56.314 "base_bdevs_list": [ 00:16:56.314 { 00:16:56.314 "name": "pt1", 00:16:56.314 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:56.314 "is_configured": true, 00:16:56.314 
"data_offset": 2048, 00:16:56.314 "data_size": 63488 00:16:56.314 }, 00:16:56.314 { 00:16:56.314 "name": "pt2", 00:16:56.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:56.314 "is_configured": true, 00:16:56.314 "data_offset": 2048, 00:16:56.314 "data_size": 63488 00:16:56.314 }, 00:16:56.314 { 00:16:56.314 "name": "pt3", 00:16:56.314 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:56.314 "is_configured": true, 00:16:56.314 "data_offset": 2048, 00:16:56.314 "data_size": 63488 00:16:56.314 }, 00:16:56.314 { 00:16:56.314 "name": "pt4", 00:16:56.314 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:56.314 "is_configured": true, 00:16:56.314 "data_offset": 2048, 00:16:56.314 "data_size": 63488 00:16:56.314 } 00:16:56.314 ] 00:16:56.314 }' 00:16:56.314 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.314 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.923 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:56.923 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:56.923 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:56.923 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:56.923 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.924 18:01:38 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.924 [2024-11-26 18:01:38.540017] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:56.924 "name": "raid_bdev1", 00:16:56.924 "aliases": [ 00:16:56.924 "cf979581-8644-4c73-a954-c355dec91a22" 00:16:56.924 ], 00:16:56.924 "product_name": "Raid Volume", 00:16:56.924 "block_size": 512, 00:16:56.924 "num_blocks": 190464, 00:16:56.924 "uuid": "cf979581-8644-4c73-a954-c355dec91a22", 00:16:56.924 "assigned_rate_limits": { 00:16:56.924 "rw_ios_per_sec": 0, 00:16:56.924 "rw_mbytes_per_sec": 0, 00:16:56.924 "r_mbytes_per_sec": 0, 00:16:56.924 "w_mbytes_per_sec": 0 00:16:56.924 }, 00:16:56.924 "claimed": false, 00:16:56.924 "zoned": false, 00:16:56.924 "supported_io_types": { 00:16:56.924 "read": true, 00:16:56.924 "write": true, 00:16:56.924 "unmap": false, 00:16:56.924 "flush": false, 00:16:56.924 "reset": true, 00:16:56.924 "nvme_admin": false, 00:16:56.924 "nvme_io": false, 00:16:56.924 "nvme_io_md": false, 00:16:56.924 "write_zeroes": true, 00:16:56.924 "zcopy": false, 00:16:56.924 "get_zone_info": false, 00:16:56.924 "zone_management": false, 00:16:56.924 "zone_append": false, 00:16:56.924 "compare": false, 00:16:56.924 "compare_and_write": false, 00:16:56.924 "abort": false, 00:16:56.924 "seek_hole": false, 00:16:56.924 "seek_data": false, 00:16:56.924 "copy": false, 00:16:56.924 "nvme_iov_md": false 00:16:56.924 }, 00:16:56.924 "driver_specific": { 00:16:56.924 "raid": { 00:16:56.924 "uuid": "cf979581-8644-4c73-a954-c355dec91a22", 00:16:56.924 "strip_size_kb": 64, 00:16:56.924 "state": "online", 00:16:56.924 "raid_level": "raid5f", 00:16:56.924 "superblock": true, 00:16:56.924 "num_base_bdevs": 4, 00:16:56.924 "num_base_bdevs_discovered": 4, 
00:16:56.924 "num_base_bdevs_operational": 4, 00:16:56.924 "base_bdevs_list": [ 00:16:56.924 { 00:16:56.924 "name": "pt1", 00:16:56.924 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:56.924 "is_configured": true, 00:16:56.924 "data_offset": 2048, 00:16:56.924 "data_size": 63488 00:16:56.924 }, 00:16:56.924 { 00:16:56.924 "name": "pt2", 00:16:56.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:56.924 "is_configured": true, 00:16:56.924 "data_offset": 2048, 00:16:56.924 "data_size": 63488 00:16:56.924 }, 00:16:56.924 { 00:16:56.924 "name": "pt3", 00:16:56.924 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:56.924 "is_configured": true, 00:16:56.924 "data_offset": 2048, 00:16:56.924 "data_size": 63488 00:16:56.924 }, 00:16:56.924 { 00:16:56.924 "name": "pt4", 00:16:56.924 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:56.924 "is_configured": true, 00:16:56.924 "data_offset": 2048, 00:16:56.924 "data_size": 63488 00:16:56.924 } 00:16:56.924 ] 00:16:56.924 } 00:16:56.924 } 00:16:56.924 }' 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:56.924 pt2 00:16:56.924 pt3 00:16:56.924 pt4' 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.924 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.924 18:01:38 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.183 [2024-11-26 18:01:38.851429] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.183 
18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cf979581-8644-4c73-a954-c355dec91a22 '!=' cf979581-8644-4c73-a954-c355dec91a22 ']' 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.183 [2024-11-26 18:01:38.879262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.183 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.183 "name": "raid_bdev1", 00:16:57.183 "uuid": "cf979581-8644-4c73-a954-c355dec91a22", 00:16:57.183 "strip_size_kb": 64, 00:16:57.183 "state": "online", 00:16:57.183 "raid_level": "raid5f", 00:16:57.183 "superblock": true, 00:16:57.183 "num_base_bdevs": 4, 00:16:57.183 "num_base_bdevs_discovered": 3, 00:16:57.183 "num_base_bdevs_operational": 3, 00:16:57.183 "base_bdevs_list": [ 00:16:57.183 { 00:16:57.183 "name": null, 00:16:57.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.183 "is_configured": false, 00:16:57.183 "data_offset": 0, 00:16:57.183 "data_size": 63488 00:16:57.183 }, 00:16:57.183 { 00:16:57.183 "name": "pt2", 00:16:57.183 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.183 "is_configured": true, 00:16:57.183 "data_offset": 2048, 00:16:57.183 "data_size": 63488 00:16:57.183 }, 00:16:57.183 { 00:16:57.183 "name": "pt3", 00:16:57.184 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.184 "is_configured": true, 00:16:57.184 "data_offset": 2048, 00:16:57.184 "data_size": 63488 00:16:57.184 }, 00:16:57.184 { 00:16:57.184 "name": "pt4", 00:16:57.184 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:57.184 "is_configured": true, 00:16:57.184 
"data_offset": 2048, 00:16:57.184 "data_size": 63488 00:16:57.184 } 00:16:57.184 ] 00:16:57.184 }' 00:16:57.184 18:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.184 18:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.758 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:57.758 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.758 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.758 [2024-11-26 18:01:39.346435] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.758 [2024-11-26 18:01:39.346595] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.758 [2024-11-26 18:01:39.346744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.758 [2024-11-26 18:01:39.346899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.758 [2024-11-26 18:01:39.346955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:57.758 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.758 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.758 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.758 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.758 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:57.758 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.758 18:01:39 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:57.758 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:57.758 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:57.758 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:57.758 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:57.758 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.758 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.758 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.758 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.759 [2024-11-26 18:01:39.446206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:57.759 [2024-11-26 18:01:39.446384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.759 [2024-11-26 18:01:39.446449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:57.759 [2024-11-26 18:01:39.446496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.759 [2024-11-26 18:01:39.449614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.759 [2024-11-26 18:01:39.449715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:57.759 [2024-11-26 18:01:39.449875] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:57.759 [2024-11-26 18:01:39.449991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:57.759 pt2 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.759 "name": "raid_bdev1", 00:16:57.759 "uuid": "cf979581-8644-4c73-a954-c355dec91a22", 00:16:57.759 "strip_size_kb": 64, 00:16:57.759 "state": "configuring", 00:16:57.759 "raid_level": "raid5f", 00:16:57.759 "superblock": true, 00:16:57.759 
"num_base_bdevs": 4, 00:16:57.759 "num_base_bdevs_discovered": 1, 00:16:57.759 "num_base_bdevs_operational": 3, 00:16:57.759 "base_bdevs_list": [ 00:16:57.759 { 00:16:57.759 "name": null, 00:16:57.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.759 "is_configured": false, 00:16:57.759 "data_offset": 2048, 00:16:57.759 "data_size": 63488 00:16:57.759 }, 00:16:57.759 { 00:16:57.759 "name": "pt2", 00:16:57.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.759 "is_configured": true, 00:16:57.759 "data_offset": 2048, 00:16:57.759 "data_size": 63488 00:16:57.759 }, 00:16:57.759 { 00:16:57.759 "name": null, 00:16:57.759 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.759 "is_configured": false, 00:16:57.759 "data_offset": 2048, 00:16:57.759 "data_size": 63488 00:16:57.759 }, 00:16:57.759 { 00:16:57.759 "name": null, 00:16:57.759 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:57.759 "is_configured": false, 00:16:57.759 "data_offset": 2048, 00:16:57.759 "data_size": 63488 00:16:57.759 } 00:16:57.759 ] 00:16:57.759 }' 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.759 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.326 [2024-11-26 18:01:39.889722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:58.326 [2024-11-26 
18:01:39.889970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.326 [2024-11-26 18:01:39.890060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:58.326 [2024-11-26 18:01:39.890105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.326 [2024-11-26 18:01:39.890718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.326 [2024-11-26 18:01:39.890804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:58.326 [2024-11-26 18:01:39.890972] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:58.326 [2024-11-26 18:01:39.891050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:58.326 pt3 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.326 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.326 "name": "raid_bdev1", 00:16:58.326 "uuid": "cf979581-8644-4c73-a954-c355dec91a22", 00:16:58.326 "strip_size_kb": 64, 00:16:58.326 "state": "configuring", 00:16:58.326 "raid_level": "raid5f", 00:16:58.326 "superblock": true, 00:16:58.326 "num_base_bdevs": 4, 00:16:58.326 "num_base_bdevs_discovered": 2, 00:16:58.326 "num_base_bdevs_operational": 3, 00:16:58.326 "base_bdevs_list": [ 00:16:58.326 { 00:16:58.326 "name": null, 00:16:58.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.326 "is_configured": false, 00:16:58.326 "data_offset": 2048, 00:16:58.326 "data_size": 63488 00:16:58.326 }, 00:16:58.326 { 00:16:58.326 "name": "pt2", 00:16:58.326 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.326 "is_configured": true, 00:16:58.326 "data_offset": 2048, 00:16:58.326 "data_size": 63488 00:16:58.326 }, 00:16:58.326 { 00:16:58.326 "name": "pt3", 00:16:58.326 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:58.326 "is_configured": true, 00:16:58.326 "data_offset": 2048, 00:16:58.326 "data_size": 63488 00:16:58.326 }, 00:16:58.326 { 00:16:58.327 "name": null, 00:16:58.327 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:58.327 "is_configured": false, 00:16:58.327 "data_offset": 2048, 
00:16:58.327 "data_size": 63488 00:16:58.327 } 00:16:58.327 ] 00:16:58.327 }' 00:16:58.327 18:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.327 18:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.585 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:58.585 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:58.585 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.586 [2024-11-26 18:01:40.348942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:58.586 [2024-11-26 18:01:40.349174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.586 [2024-11-26 18:01:40.349230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:58.586 [2024-11-26 18:01:40.349273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.586 [2024-11-26 18:01:40.349945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.586 [2024-11-26 18:01:40.350054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:58.586 [2024-11-26 18:01:40.350236] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:58.586 [2024-11-26 18:01:40.350321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:58.586 [2024-11-26 18:01:40.350554] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:58.586 [2024-11-26 18:01:40.350604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:58.586 [2024-11-26 18:01:40.350961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:58.586 [2024-11-26 18:01:40.358677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:58.586 [2024-11-26 18:01:40.358754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:58.586 [2024-11-26 18:01:40.359189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.586 pt4 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.586 
18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.586 "name": "raid_bdev1", 00:16:58.586 "uuid": "cf979581-8644-4c73-a954-c355dec91a22", 00:16:58.586 "strip_size_kb": 64, 00:16:58.586 "state": "online", 00:16:58.586 "raid_level": "raid5f", 00:16:58.586 "superblock": true, 00:16:58.586 "num_base_bdevs": 4, 00:16:58.586 "num_base_bdevs_discovered": 3, 00:16:58.586 "num_base_bdevs_operational": 3, 00:16:58.586 "base_bdevs_list": [ 00:16:58.586 { 00:16:58.586 "name": null, 00:16:58.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.586 "is_configured": false, 00:16:58.586 "data_offset": 2048, 00:16:58.586 "data_size": 63488 00:16:58.586 }, 00:16:58.586 { 00:16:58.586 "name": "pt2", 00:16:58.586 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.586 "is_configured": true, 00:16:58.586 "data_offset": 2048, 00:16:58.586 "data_size": 63488 00:16:58.586 }, 00:16:58.586 { 00:16:58.586 "name": "pt3", 00:16:58.586 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:58.586 "is_configured": true, 00:16:58.586 "data_offset": 2048, 00:16:58.586 "data_size": 63488 00:16:58.586 }, 00:16:58.586 { 00:16:58.586 "name": "pt4", 00:16:58.586 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:58.586 "is_configured": true, 00:16:58.586 "data_offset": 2048, 00:16:58.586 "data_size": 63488 00:16:58.586 } 00:16:58.586 ] 00:16:58.586 }' 00:16:58.586 18:01:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.586 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.152 [2024-11-26 18:01:40.805799] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.152 [2024-11-26 18:01:40.805965] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.152 [2024-11-26 18:01:40.806152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.152 [2024-11-26 18:01:40.806323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.152 [2024-11-26 18:01:40.806398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.152 [2024-11-26 18:01:40.869662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:59.152 [2024-11-26 18:01:40.869859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.152 [2024-11-26 18:01:40.869943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:59.152 [2024-11-26 18:01:40.869999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.152 [2024-11-26 18:01:40.873127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.152 [2024-11-26 18:01:40.873232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:59.152 [2024-11-26 18:01:40.873405] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:59.152 [2024-11-26 18:01:40.873537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:59.152 
[2024-11-26 18:01:40.873780] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:59.152 [2024-11-26 18:01:40.873862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.152 [2024-11-26 18:01:40.873943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:59.152 [2024-11-26 18:01:40.874117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.152 pt1 00:16:59.152 [2024-11-26 18:01:40.874377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.152 "name": "raid_bdev1", 00:16:59.152 "uuid": "cf979581-8644-4c73-a954-c355dec91a22", 00:16:59.152 "strip_size_kb": 64, 00:16:59.152 "state": "configuring", 00:16:59.152 "raid_level": "raid5f", 00:16:59.152 "superblock": true, 00:16:59.152 "num_base_bdevs": 4, 00:16:59.152 "num_base_bdevs_discovered": 2, 00:16:59.152 "num_base_bdevs_operational": 3, 00:16:59.152 "base_bdevs_list": [ 00:16:59.152 { 00:16:59.152 "name": null, 00:16:59.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.152 "is_configured": false, 00:16:59.152 "data_offset": 2048, 00:16:59.152 "data_size": 63488 00:16:59.152 }, 00:16:59.152 { 00:16:59.152 "name": "pt2", 00:16:59.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.152 "is_configured": true, 00:16:59.152 "data_offset": 2048, 00:16:59.152 "data_size": 63488 00:16:59.152 }, 00:16:59.152 { 00:16:59.152 "name": "pt3", 00:16:59.152 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:59.152 "is_configured": true, 00:16:59.152 "data_offset": 2048, 00:16:59.152 "data_size": 63488 00:16:59.152 }, 00:16:59.152 { 00:16:59.152 "name": null, 00:16:59.152 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:59.152 "is_configured": false, 00:16:59.152 "data_offset": 2048, 00:16:59.152 "data_size": 63488 00:16:59.152 } 00:16:59.152 ] 
00:16:59.152 }' 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.152 18:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.719 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:59.719 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:59.719 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.719 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.719 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.719 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:59.719 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:59.719 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.719 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.719 [2024-11-26 18:01:41.353321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:59.719 [2024-11-26 18:01:41.353528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.719 [2024-11-26 18:01:41.353590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:59.719 [2024-11-26 18:01:41.353637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.719 [2024-11-26 18:01:41.354319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.719 [2024-11-26 18:01:41.354409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:59.719 [2024-11-26 18:01:41.354581] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:59.719 [2024-11-26 18:01:41.354653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:59.719 [2024-11-26 18:01:41.354892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:59.719 [2024-11-26 18:01:41.354946] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:59.719 [2024-11-26 18:01:41.355330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:59.719 [2024-11-26 18:01:41.364147] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:59.719 pt4 00:16:59.719 [2024-11-26 18:01:41.364245] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:59.719 [2024-11-26 18:01:41.364632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.719 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.719 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:59.719 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.719 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.719 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.719 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.719 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:59.719 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.720 18:01:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.720 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.720 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.720 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.720 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.720 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.720 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.720 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.720 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.720 "name": "raid_bdev1", 00:16:59.720 "uuid": "cf979581-8644-4c73-a954-c355dec91a22", 00:16:59.720 "strip_size_kb": 64, 00:16:59.720 "state": "online", 00:16:59.720 "raid_level": "raid5f", 00:16:59.720 "superblock": true, 00:16:59.720 "num_base_bdevs": 4, 00:16:59.720 "num_base_bdevs_discovered": 3, 00:16:59.720 "num_base_bdevs_operational": 3, 00:16:59.720 "base_bdevs_list": [ 00:16:59.720 { 00:16:59.720 "name": null, 00:16:59.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.720 "is_configured": false, 00:16:59.720 "data_offset": 2048, 00:16:59.720 "data_size": 63488 00:16:59.720 }, 00:16:59.720 { 00:16:59.720 "name": "pt2", 00:16:59.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.720 "is_configured": true, 00:16:59.720 "data_offset": 2048, 00:16:59.720 "data_size": 63488 00:16:59.720 }, 00:16:59.720 { 00:16:59.720 "name": "pt3", 00:16:59.720 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:59.720 "is_configured": true, 00:16:59.720 "data_offset": 2048, 00:16:59.720 "data_size": 63488 
00:16:59.720 }, 00:16:59.720 { 00:16:59.720 "name": "pt4", 00:16:59.720 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:59.720 "is_configured": true, 00:16:59.720 "data_offset": 2048, 00:16:59.720 "data_size": 63488 00:16:59.720 } 00:16:59.720 ] 00:16:59.720 }' 00:16:59.720 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.720 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.978 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:59.978 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.978 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.978 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:59.979 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.237 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:00.237 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:00.237 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:00.237 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.237 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.237 [2024-11-26 18:01:41.876367] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.237 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.237 18:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' cf979581-8644-4c73-a954-c355dec91a22 '!=' cf979581-8644-4c73-a954-c355dec91a22 ']' 00:17:00.237 18:01:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84531 00:17:00.237 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84531 ']' 00:17:00.237 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84531 00:17:00.237 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:00.237 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.237 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84531 00:17:00.237 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.237 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.237 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84531' 00:17:00.237 killing process with pid 84531 00:17:00.237 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84531 00:17:00.237 [2024-11-26 18:01:41.943960] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.237 [2024-11-26 18:01:41.944138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.237 18:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84531 00:17:00.237 [2024-11-26 18:01:41.944250] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.237 [2024-11-26 18:01:41.944276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:00.805 [2024-11-26 18:01:42.453232] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:02.182 18:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:02.182 
00:17:02.182 real 0m9.057s 00:17:02.182 user 0m13.802s 00:17:02.182 sys 0m1.663s 00:17:02.182 ************************************ 00:17:02.182 END TEST raid5f_superblock_test 00:17:02.182 ************************************ 00:17:02.182 18:01:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.182 18:01:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.182 18:01:43 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:02.182 18:01:43 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:17:02.182 18:01:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:02.182 18:01:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.182 18:01:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:02.182 ************************************ 00:17:02.182 START TEST raid5f_rebuild_test 00:17:02.182 ************************************ 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:02.182 18:01:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85022 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85022 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85022 ']' 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.182 18:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.441 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:02.441 Zero copy mechanism will not be used. 00:17:02.441 [2024-11-26 18:01:44.077025] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:17:02.441 [2024-11-26 18:01:44.077168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85022 ] 00:17:02.441 [2024-11-26 18:01:44.254413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.699 [2024-11-26 18:01:44.414869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.958 [2024-11-26 18:01:44.682911] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.958 [2024-11-26 18:01:44.682976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:03.217 18:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.217 18:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:03.217 18:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:03.217 18:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:03.217 18:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.217 18:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.217 BaseBdev1_malloc 00:17:03.217 18:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.217 18:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:03.217 18:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.217 18:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.217 [2024-11-26 18:01:44.992082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:17:03.217 [2024-11-26 18:01:44.992270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.217 [2024-11-26 18:01:44.992327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:03.217 [2024-11-26 18:01:44.992375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.217 [2024-11-26 18:01:44.995346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.217 [2024-11-26 18:01:44.995453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:03.217 BaseBdev1 00:17:03.217 18:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.217 18:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:03.217 18:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:03.217 18:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.217 18:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.217 BaseBdev2_malloc 00:17:03.217 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.217 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:03.217 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.218 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.218 [2024-11-26 18:01:45.064763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:03.218 [2024-11-26 18:01:45.064870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.218 [2024-11-26 18:01:45.064930] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:03.218 [2024-11-26 18:01:45.064950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.218 [2024-11-26 18:01:45.068424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.218 [2024-11-26 18:01:45.068481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:03.218 BaseBdev2 00:17:03.218 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.218 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:03.218 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:03.218 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.218 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.478 BaseBdev3_malloc 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.479 [2024-11-26 18:01:45.142559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:03.479 [2024-11-26 18:01:45.142774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.479 [2024-11-26 18:01:45.142847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:03.479 [2024-11-26 18:01:45.142907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.479 
[2024-11-26 18:01:45.146039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.479 [2024-11-26 18:01:45.146151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:03.479 BaseBdev3 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.479 BaseBdev4_malloc 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.479 [2024-11-26 18:01:45.211042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:03.479 [2024-11-26 18:01:45.211204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.479 [2024-11-26 18:01:45.211252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:03.479 [2024-11-26 18:01:45.211296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.479 [2024-11-26 18:01:45.214037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.479 [2024-11-26 18:01:45.214122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:17:03.479 BaseBdev4 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.479 spare_malloc 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.479 spare_delay 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.479 [2024-11-26 18:01:45.289778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:03.479 [2024-11-26 18:01:45.289943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.479 [2024-11-26 18:01:45.289971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:03.479 [2024-11-26 18:01:45.289986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.479 [2024-11-26 18:01:45.292673] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.479 [2024-11-26 18:01:45.292718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:03.479 spare 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.479 [2024-11-26 18:01:45.301813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.479 [2024-11-26 18:01:45.304162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.479 [2024-11-26 18:01:45.304282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:03.479 [2024-11-26 18:01:45.304369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:03.479 [2024-11-26 18:01:45.304505] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:03.479 [2024-11-26 18:01:45.304562] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:03.479 [2024-11-26 18:01:45.304893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:03.479 [2024-11-26 18:01:45.313870] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:03.479 [2024-11-26 18:01:45.313954] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:03.479 [2024-11-26 18:01:45.314228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.479 18:01:45 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.479 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.739 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.739 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.739 "name": "raid_bdev1", 00:17:03.739 "uuid": "628ffe58-875a-486f-9f0a-03f6d4456933", 00:17:03.739 "strip_size_kb": 64, 00:17:03.739 "state": "online", 00:17:03.739 
"raid_level": "raid5f", 00:17:03.739 "superblock": false, 00:17:03.739 "num_base_bdevs": 4, 00:17:03.739 "num_base_bdevs_discovered": 4, 00:17:03.739 "num_base_bdevs_operational": 4, 00:17:03.739 "base_bdevs_list": [ 00:17:03.739 { 00:17:03.739 "name": "BaseBdev1", 00:17:03.739 "uuid": "1c99a1a0-4465-5bd8-9bb7-1842f9b85875", 00:17:03.739 "is_configured": true, 00:17:03.739 "data_offset": 0, 00:17:03.739 "data_size": 65536 00:17:03.739 }, 00:17:03.739 { 00:17:03.739 "name": "BaseBdev2", 00:17:03.739 "uuid": "cce18395-8e60-5f9a-a1b7-f8686cc2cf96", 00:17:03.739 "is_configured": true, 00:17:03.739 "data_offset": 0, 00:17:03.739 "data_size": 65536 00:17:03.739 }, 00:17:03.739 { 00:17:03.739 "name": "BaseBdev3", 00:17:03.739 "uuid": "a341c3d7-00ca-5477-b1ea-16bff341f553", 00:17:03.739 "is_configured": true, 00:17:03.739 "data_offset": 0, 00:17:03.739 "data_size": 65536 00:17:03.739 }, 00:17:03.739 { 00:17:03.739 "name": "BaseBdev4", 00:17:03.739 "uuid": "7dd79bbc-36e8-58b0-af0f-237dd910bafe", 00:17:03.739 "is_configured": true, 00:17:03.739 "data_offset": 0, 00:17:03.739 "data_size": 65536 00:17:03.739 } 00:17:03.739 ] 00:17:03.739 }' 00:17:03.739 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.739 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.998 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:03.998 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:03.998 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.998 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.998 [2024-11-26 18:01:45.777258] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.998 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:03.998 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:17:03.998 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.998 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:03.998 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.998 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.998 18:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.256 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:04.256 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:04.256 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:04.256 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:04.256 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:04.256 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:04.256 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:04.257 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:04.257 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:04.257 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:04.257 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:04.257 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:04.257 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:17:04.257 18:01:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:04.257 [2024-11-26 18:01:46.084486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:04.257 /dev/nbd0 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:04.517 1+0 records in 00:17:04.517 1+0 records out 00:17:04.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447114 s, 9.2 MB/s 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:04.517 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:17:05.086 512+0 records in 00:17:05.086 512+0 records out 00:17:05.086 100663296 bytes (101 MB, 96 MiB) copied, 0.537821 s, 187 MB/s 00:17:05.086 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:05.086 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:05.086 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:05.086 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:05.086 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:05.086 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:05.086 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:05.086 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:05.086 
[2024-11-26 18:01:46.936113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.086 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:05.086 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:05.086 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:05.086 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:05.086 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:05.086 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:05.086 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:05.086 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:05.086 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.086 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.346 [2024-11-26 18:01:46.955371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:05.346 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.346 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:05.346 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.346 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.346 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.346 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.346 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:17:05.346 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.346 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.346 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.346 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.346 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.346 18:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.346 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.346 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.346 18:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.346 18:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.346 "name": "raid_bdev1", 00:17:05.346 "uuid": "628ffe58-875a-486f-9f0a-03f6d4456933", 00:17:05.346 "strip_size_kb": 64, 00:17:05.346 "state": "online", 00:17:05.346 "raid_level": "raid5f", 00:17:05.346 "superblock": false, 00:17:05.346 "num_base_bdevs": 4, 00:17:05.346 "num_base_bdevs_discovered": 3, 00:17:05.346 "num_base_bdevs_operational": 3, 00:17:05.346 "base_bdevs_list": [ 00:17:05.346 { 00:17:05.346 "name": null, 00:17:05.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.346 "is_configured": false, 00:17:05.346 "data_offset": 0, 00:17:05.346 "data_size": 65536 00:17:05.346 }, 00:17:05.346 { 00:17:05.346 "name": "BaseBdev2", 00:17:05.346 "uuid": "cce18395-8e60-5f9a-a1b7-f8686cc2cf96", 00:17:05.347 "is_configured": true, 00:17:05.347 "data_offset": 0, 00:17:05.347 "data_size": 65536 00:17:05.347 }, 00:17:05.347 { 00:17:05.347 "name": "BaseBdev3", 00:17:05.347 "uuid": 
"a341c3d7-00ca-5477-b1ea-16bff341f553", 00:17:05.347 "is_configured": true, 00:17:05.347 "data_offset": 0, 00:17:05.347 "data_size": 65536 00:17:05.347 }, 00:17:05.347 { 00:17:05.347 "name": "BaseBdev4", 00:17:05.347 "uuid": "7dd79bbc-36e8-58b0-af0f-237dd910bafe", 00:17:05.347 "is_configured": true, 00:17:05.347 "data_offset": 0, 00:17:05.347 "data_size": 65536 00:17:05.347 } 00:17:05.347 ] 00:17:05.347 }' 00:17:05.347 18:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.347 18:01:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.606 18:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:05.606 18:01:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.606 18:01:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.606 [2024-11-26 18:01:47.414582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.606 [2024-11-26 18:01:47.433949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:05.606 18:01:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.606 18:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:05.606 [2024-11-26 18:01:47.446327] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.985 18:01:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.985 "name": "raid_bdev1", 00:17:06.985 "uuid": "628ffe58-875a-486f-9f0a-03f6d4456933", 00:17:06.985 "strip_size_kb": 64, 00:17:06.985 "state": "online", 00:17:06.985 "raid_level": "raid5f", 00:17:06.985 "superblock": false, 00:17:06.985 "num_base_bdevs": 4, 00:17:06.985 "num_base_bdevs_discovered": 4, 00:17:06.985 "num_base_bdevs_operational": 4, 00:17:06.985 "process": { 00:17:06.985 "type": "rebuild", 00:17:06.985 "target": "spare", 00:17:06.985 "progress": { 00:17:06.985 "blocks": 19200, 00:17:06.985 "percent": 9 00:17:06.985 } 00:17:06.985 }, 00:17:06.985 "base_bdevs_list": [ 00:17:06.985 { 00:17:06.985 "name": "spare", 00:17:06.985 "uuid": "cffff2b8-40fe-52a0-8343-3638e2518dcc", 00:17:06.985 "is_configured": true, 00:17:06.985 "data_offset": 0, 00:17:06.985 "data_size": 65536 00:17:06.985 }, 00:17:06.985 { 00:17:06.985 "name": "BaseBdev2", 00:17:06.985 "uuid": "cce18395-8e60-5f9a-a1b7-f8686cc2cf96", 00:17:06.985 "is_configured": true, 00:17:06.985 "data_offset": 0, 00:17:06.985 "data_size": 65536 00:17:06.985 }, 00:17:06.985 { 00:17:06.985 "name": "BaseBdev3", 00:17:06.985 "uuid": "a341c3d7-00ca-5477-b1ea-16bff341f553", 00:17:06.985 "is_configured": true, 00:17:06.985 "data_offset": 0, 00:17:06.985 "data_size": 65536 00:17:06.985 }, 
00:17:06.985 { 00:17:06.985 "name": "BaseBdev4", 00:17:06.985 "uuid": "7dd79bbc-36e8-58b0-af0f-237dd910bafe", 00:17:06.985 "is_configured": true, 00:17:06.985 "data_offset": 0, 00:17:06.985 "data_size": 65536 00:17:06.985 } 00:17:06.985 ] 00:17:06.985 }' 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.985 [2024-11-26 18:01:48.602134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.985 [2024-11-26 18:01:48.656026] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:06.985 [2024-11-26 18:01:48.656269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.985 [2024-11-26 18:01:48.656294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.985 [2024-11-26 18:01:48.656307] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.985 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.985 "name": "raid_bdev1", 00:17:06.985 "uuid": "628ffe58-875a-486f-9f0a-03f6d4456933", 00:17:06.985 "strip_size_kb": 64, 00:17:06.985 "state": "online", 00:17:06.985 "raid_level": "raid5f", 00:17:06.985 "superblock": false, 00:17:06.985 "num_base_bdevs": 4, 00:17:06.985 "num_base_bdevs_discovered": 3, 00:17:06.985 "num_base_bdevs_operational": 3, 00:17:06.985 "base_bdevs_list": [ 00:17:06.985 { 00:17:06.985 "name": null, 00:17:06.985 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:06.985 "is_configured": false, 00:17:06.985 "data_offset": 0, 00:17:06.985 "data_size": 65536 00:17:06.985 }, 00:17:06.985 { 00:17:06.985 "name": "BaseBdev2", 00:17:06.985 "uuid": "cce18395-8e60-5f9a-a1b7-f8686cc2cf96", 00:17:06.985 "is_configured": true, 00:17:06.985 "data_offset": 0, 00:17:06.985 "data_size": 65536 00:17:06.985 }, 00:17:06.986 { 00:17:06.986 "name": "BaseBdev3", 00:17:06.986 "uuid": "a341c3d7-00ca-5477-b1ea-16bff341f553", 00:17:06.986 "is_configured": true, 00:17:06.986 "data_offset": 0, 00:17:06.986 "data_size": 65536 00:17:06.986 }, 00:17:06.986 { 00:17:06.986 "name": "BaseBdev4", 00:17:06.986 "uuid": "7dd79bbc-36e8-58b0-af0f-237dd910bafe", 00:17:06.986 "is_configured": true, 00:17:06.986 "data_offset": 0, 00:17:06.986 "data_size": 65536 00:17:06.986 } 00:17:06.986 ] 00:17:06.986 }' 00:17:06.986 18:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.986 18:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.555 18:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:07.555 18:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.555 18:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:07.555 18:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:07.555 18:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.555 18:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.555 18:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.555 18:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.555 18:01:49 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.555 18:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.555 18:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.555 "name": "raid_bdev1", 00:17:07.555 "uuid": "628ffe58-875a-486f-9f0a-03f6d4456933", 00:17:07.555 "strip_size_kb": 64, 00:17:07.555 "state": "online", 00:17:07.555 "raid_level": "raid5f", 00:17:07.555 "superblock": false, 00:17:07.555 "num_base_bdevs": 4, 00:17:07.555 "num_base_bdevs_discovered": 3, 00:17:07.555 "num_base_bdevs_operational": 3, 00:17:07.555 "base_bdevs_list": [ 00:17:07.555 { 00:17:07.555 "name": null, 00:17:07.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.555 "is_configured": false, 00:17:07.555 "data_offset": 0, 00:17:07.555 "data_size": 65536 00:17:07.555 }, 00:17:07.556 { 00:17:07.556 "name": "BaseBdev2", 00:17:07.556 "uuid": "cce18395-8e60-5f9a-a1b7-f8686cc2cf96", 00:17:07.556 "is_configured": true, 00:17:07.556 "data_offset": 0, 00:17:07.556 "data_size": 65536 00:17:07.556 }, 00:17:07.556 { 00:17:07.556 "name": "BaseBdev3", 00:17:07.556 "uuid": "a341c3d7-00ca-5477-b1ea-16bff341f553", 00:17:07.556 "is_configured": true, 00:17:07.556 "data_offset": 0, 00:17:07.556 "data_size": 65536 00:17:07.556 }, 00:17:07.556 { 00:17:07.556 "name": "BaseBdev4", 00:17:07.556 "uuid": "7dd79bbc-36e8-58b0-af0f-237dd910bafe", 00:17:07.556 "is_configured": true, 00:17:07.556 "data_offset": 0, 00:17:07.556 "data_size": 65536 00:17:07.556 } 00:17:07.556 ] 00:17:07.556 }' 00:17:07.556 18:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.556 18:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:07.556 18:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.556 18:01:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:07.556 18:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:07.556 18:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.556 18:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.556 [2024-11-26 18:01:49.265805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:07.556 [2024-11-26 18:01:49.283320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:17:07.556 18:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.556 18:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:07.556 [2024-11-26 18:01:49.294169] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:08.495 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.495 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.495 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.495 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.495 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.495 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.495 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.495 18:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.495 18:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.495 18:01:50 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.495 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.495 "name": "raid_bdev1", 00:17:08.495 "uuid": "628ffe58-875a-486f-9f0a-03f6d4456933", 00:17:08.495 "strip_size_kb": 64, 00:17:08.495 "state": "online", 00:17:08.495 "raid_level": "raid5f", 00:17:08.495 "superblock": false, 00:17:08.495 "num_base_bdevs": 4, 00:17:08.495 "num_base_bdevs_discovered": 4, 00:17:08.495 "num_base_bdevs_operational": 4, 00:17:08.495 "process": { 00:17:08.495 "type": "rebuild", 00:17:08.495 "target": "spare", 00:17:08.495 "progress": { 00:17:08.495 "blocks": 17280, 00:17:08.495 "percent": 8 00:17:08.495 } 00:17:08.495 }, 00:17:08.495 "base_bdevs_list": [ 00:17:08.495 { 00:17:08.495 "name": "spare", 00:17:08.495 "uuid": "cffff2b8-40fe-52a0-8343-3638e2518dcc", 00:17:08.495 "is_configured": true, 00:17:08.495 "data_offset": 0, 00:17:08.495 "data_size": 65536 00:17:08.495 }, 00:17:08.495 { 00:17:08.495 "name": "BaseBdev2", 00:17:08.495 "uuid": "cce18395-8e60-5f9a-a1b7-f8686cc2cf96", 00:17:08.495 "is_configured": true, 00:17:08.495 "data_offset": 0, 00:17:08.495 "data_size": 65536 00:17:08.495 }, 00:17:08.495 { 00:17:08.495 "name": "BaseBdev3", 00:17:08.495 "uuid": "a341c3d7-00ca-5477-b1ea-16bff341f553", 00:17:08.495 "is_configured": true, 00:17:08.495 "data_offset": 0, 00:17:08.495 "data_size": 65536 00:17:08.495 }, 00:17:08.495 { 00:17:08.495 "name": "BaseBdev4", 00:17:08.495 "uuid": "7dd79bbc-36e8-58b0-af0f-237dd910bafe", 00:17:08.495 "is_configured": true, 00:17:08.495 "data_offset": 0, 00:17:08.495 "data_size": 65536 00:17:08.495 } 00:17:08.495 ] 00:17:08.495 }' 00:17:08.495 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=651 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.755 "name": "raid_bdev1", 00:17:08.755 "uuid": "628ffe58-875a-486f-9f0a-03f6d4456933", 
00:17:08.755 "strip_size_kb": 64, 00:17:08.755 "state": "online", 00:17:08.755 "raid_level": "raid5f", 00:17:08.755 "superblock": false, 00:17:08.755 "num_base_bdevs": 4, 00:17:08.755 "num_base_bdevs_discovered": 4, 00:17:08.755 "num_base_bdevs_operational": 4, 00:17:08.755 "process": { 00:17:08.755 "type": "rebuild", 00:17:08.755 "target": "spare", 00:17:08.755 "progress": { 00:17:08.755 "blocks": 21120, 00:17:08.755 "percent": 10 00:17:08.755 } 00:17:08.755 }, 00:17:08.755 "base_bdevs_list": [ 00:17:08.755 { 00:17:08.755 "name": "spare", 00:17:08.755 "uuid": "cffff2b8-40fe-52a0-8343-3638e2518dcc", 00:17:08.755 "is_configured": true, 00:17:08.755 "data_offset": 0, 00:17:08.755 "data_size": 65536 00:17:08.755 }, 00:17:08.755 { 00:17:08.755 "name": "BaseBdev2", 00:17:08.755 "uuid": "cce18395-8e60-5f9a-a1b7-f8686cc2cf96", 00:17:08.755 "is_configured": true, 00:17:08.755 "data_offset": 0, 00:17:08.755 "data_size": 65536 00:17:08.755 }, 00:17:08.755 { 00:17:08.755 "name": "BaseBdev3", 00:17:08.755 "uuid": "a341c3d7-00ca-5477-b1ea-16bff341f553", 00:17:08.755 "is_configured": true, 00:17:08.755 "data_offset": 0, 00:17:08.755 "data_size": 65536 00:17:08.755 }, 00:17:08.755 { 00:17:08.755 "name": "BaseBdev4", 00:17:08.755 "uuid": "7dd79bbc-36e8-58b0-af0f-237dd910bafe", 00:17:08.755 "is_configured": true, 00:17:08.755 "data_offset": 0, 00:17:08.755 "data_size": 65536 00:17:08.755 } 00:17:08.755 ] 00:17:08.755 }' 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.755 18:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:10.133 18:01:51 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.133 18:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.133 18:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.133 18:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.133 18:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.133 18:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.133 18:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.133 18:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.133 18:01:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.133 18:01:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.133 18:01:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.133 18:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.133 "name": "raid_bdev1", 00:17:10.133 "uuid": "628ffe58-875a-486f-9f0a-03f6d4456933", 00:17:10.133 "strip_size_kb": 64, 00:17:10.133 "state": "online", 00:17:10.133 "raid_level": "raid5f", 00:17:10.133 "superblock": false, 00:17:10.133 "num_base_bdevs": 4, 00:17:10.133 "num_base_bdevs_discovered": 4, 00:17:10.133 "num_base_bdevs_operational": 4, 00:17:10.133 "process": { 00:17:10.133 "type": "rebuild", 00:17:10.133 "target": "spare", 00:17:10.133 "progress": { 00:17:10.133 "blocks": 42240, 00:17:10.133 "percent": 21 00:17:10.133 } 00:17:10.133 }, 00:17:10.133 "base_bdevs_list": [ 00:17:10.133 { 00:17:10.133 "name": "spare", 00:17:10.133 "uuid": "cffff2b8-40fe-52a0-8343-3638e2518dcc", 
00:17:10.133 "is_configured": true, 00:17:10.133 "data_offset": 0, 00:17:10.133 "data_size": 65536 00:17:10.133 }, 00:17:10.133 { 00:17:10.133 "name": "BaseBdev2", 00:17:10.133 "uuid": "cce18395-8e60-5f9a-a1b7-f8686cc2cf96", 00:17:10.133 "is_configured": true, 00:17:10.133 "data_offset": 0, 00:17:10.133 "data_size": 65536 00:17:10.133 }, 00:17:10.133 { 00:17:10.133 "name": "BaseBdev3", 00:17:10.134 "uuid": "a341c3d7-00ca-5477-b1ea-16bff341f553", 00:17:10.134 "is_configured": true, 00:17:10.134 "data_offset": 0, 00:17:10.134 "data_size": 65536 00:17:10.134 }, 00:17:10.134 { 00:17:10.134 "name": "BaseBdev4", 00:17:10.134 "uuid": "7dd79bbc-36e8-58b0-af0f-237dd910bafe", 00:17:10.134 "is_configured": true, 00:17:10.134 "data_offset": 0, 00:17:10.134 "data_size": 65536 00:17:10.134 } 00:17:10.134 ] 00:17:10.134 }' 00:17:10.134 18:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.134 18:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.134 18:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.134 18:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.134 18:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.105 18:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.106 18:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.106 18:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.106 18:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.106 18:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.106 18:01:52 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.106 18:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.106 18:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.106 18:01:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.106 18:01:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.106 18:01:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.106 18:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.106 "name": "raid_bdev1", 00:17:11.106 "uuid": "628ffe58-875a-486f-9f0a-03f6d4456933", 00:17:11.106 "strip_size_kb": 64, 00:17:11.106 "state": "online", 00:17:11.106 "raid_level": "raid5f", 00:17:11.106 "superblock": false, 00:17:11.106 "num_base_bdevs": 4, 00:17:11.106 "num_base_bdevs_discovered": 4, 00:17:11.106 "num_base_bdevs_operational": 4, 00:17:11.106 "process": { 00:17:11.106 "type": "rebuild", 00:17:11.106 "target": "spare", 00:17:11.106 "progress": { 00:17:11.106 "blocks": 65280, 00:17:11.106 "percent": 33 00:17:11.106 } 00:17:11.106 }, 00:17:11.106 "base_bdevs_list": [ 00:17:11.106 { 00:17:11.106 "name": "spare", 00:17:11.106 "uuid": "cffff2b8-40fe-52a0-8343-3638e2518dcc", 00:17:11.106 "is_configured": true, 00:17:11.106 "data_offset": 0, 00:17:11.106 "data_size": 65536 00:17:11.106 }, 00:17:11.106 { 00:17:11.106 "name": "BaseBdev2", 00:17:11.106 "uuid": "cce18395-8e60-5f9a-a1b7-f8686cc2cf96", 00:17:11.106 "is_configured": true, 00:17:11.106 "data_offset": 0, 00:17:11.106 "data_size": 65536 00:17:11.106 }, 00:17:11.106 { 00:17:11.106 "name": "BaseBdev3", 00:17:11.106 "uuid": "a341c3d7-00ca-5477-b1ea-16bff341f553", 00:17:11.106 "is_configured": true, 00:17:11.106 "data_offset": 0, 00:17:11.106 "data_size": 65536 00:17:11.106 }, 00:17:11.106 { 00:17:11.106 "name": 
"BaseBdev4", 00:17:11.106 "uuid": "7dd79bbc-36e8-58b0-af0f-237dd910bafe", 00:17:11.106 "is_configured": true, 00:17:11.106 "data_offset": 0, 00:17:11.106 "data_size": 65536 00:17:11.106 } 00:17:11.106 ] 00:17:11.106 }' 00:17:11.106 18:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.106 18:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.106 18:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.106 18:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.106 18:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:12.044 18:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.044 18:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.044 18:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.044 18:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.044 18:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.044 18:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.044 18:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.044 18:01:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.044 18:01:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.044 18:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.044 18:01:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.304 18:01:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.304 "name": "raid_bdev1", 00:17:12.304 "uuid": "628ffe58-875a-486f-9f0a-03f6d4456933", 00:17:12.304 "strip_size_kb": 64, 00:17:12.304 "state": "online", 00:17:12.304 "raid_level": "raid5f", 00:17:12.304 "superblock": false, 00:17:12.304 "num_base_bdevs": 4, 00:17:12.304 "num_base_bdevs_discovered": 4, 00:17:12.304 "num_base_bdevs_operational": 4, 00:17:12.304 "process": { 00:17:12.304 "type": "rebuild", 00:17:12.304 "target": "spare", 00:17:12.304 "progress": { 00:17:12.304 "blocks": 86400, 00:17:12.304 "percent": 43 00:17:12.304 } 00:17:12.304 }, 00:17:12.304 "base_bdevs_list": [ 00:17:12.304 { 00:17:12.304 "name": "spare", 00:17:12.304 "uuid": "cffff2b8-40fe-52a0-8343-3638e2518dcc", 00:17:12.304 "is_configured": true, 00:17:12.304 "data_offset": 0, 00:17:12.304 "data_size": 65536 00:17:12.304 }, 00:17:12.304 { 00:17:12.304 "name": "BaseBdev2", 00:17:12.304 "uuid": "cce18395-8e60-5f9a-a1b7-f8686cc2cf96", 00:17:12.304 "is_configured": true, 00:17:12.304 "data_offset": 0, 00:17:12.304 "data_size": 65536 00:17:12.304 }, 00:17:12.304 { 00:17:12.304 "name": "BaseBdev3", 00:17:12.304 "uuid": "a341c3d7-00ca-5477-b1ea-16bff341f553", 00:17:12.304 "is_configured": true, 00:17:12.304 "data_offset": 0, 00:17:12.304 "data_size": 65536 00:17:12.304 }, 00:17:12.304 { 00:17:12.304 "name": "BaseBdev4", 00:17:12.304 "uuid": "7dd79bbc-36e8-58b0-af0f-237dd910bafe", 00:17:12.304 "is_configured": true, 00:17:12.304 "data_offset": 0, 00:17:12.304 "data_size": 65536 00:17:12.304 } 00:17:12.305 ] 00:17:12.305 }' 00:17:12.305 18:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.305 18:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.305 18:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.305 18:01:53 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.305 18:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:13.243 18:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.243 18:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.243 18:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.243 18:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.243 18:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.243 18:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.243 18:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.243 18:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.243 18:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.243 18:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.243 18:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.243 18:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.243 "name": "raid_bdev1", 00:17:13.243 "uuid": "628ffe58-875a-486f-9f0a-03f6d4456933", 00:17:13.243 "strip_size_kb": 64, 00:17:13.243 "state": "online", 00:17:13.243 "raid_level": "raid5f", 00:17:13.243 "superblock": false, 00:17:13.243 "num_base_bdevs": 4, 00:17:13.243 "num_base_bdevs_discovered": 4, 00:17:13.243 "num_base_bdevs_operational": 4, 00:17:13.243 "process": { 00:17:13.243 "type": "rebuild", 00:17:13.243 "target": "spare", 00:17:13.243 "progress": { 00:17:13.243 "blocks": 107520, 00:17:13.243 "percent": 54 00:17:13.243 } 
00:17:13.243 }, 00:17:13.243 "base_bdevs_list": [ 00:17:13.243 { 00:17:13.243 "name": "spare", 00:17:13.244 "uuid": "cffff2b8-40fe-52a0-8343-3638e2518dcc", 00:17:13.244 "is_configured": true, 00:17:13.244 "data_offset": 0, 00:17:13.244 "data_size": 65536 00:17:13.244 }, 00:17:13.244 { 00:17:13.244 "name": "BaseBdev2", 00:17:13.244 "uuid": "cce18395-8e60-5f9a-a1b7-f8686cc2cf96", 00:17:13.244 "is_configured": true, 00:17:13.244 "data_offset": 0, 00:17:13.244 "data_size": 65536 00:17:13.244 }, 00:17:13.244 { 00:17:13.244 "name": "BaseBdev3", 00:17:13.244 "uuid": "a341c3d7-00ca-5477-b1ea-16bff341f553", 00:17:13.244 "is_configured": true, 00:17:13.244 "data_offset": 0, 00:17:13.244 "data_size": 65536 00:17:13.244 }, 00:17:13.244 { 00:17:13.244 "name": "BaseBdev4", 00:17:13.244 "uuid": "7dd79bbc-36e8-58b0-af0f-237dd910bafe", 00:17:13.244 "is_configured": true, 00:17:13.244 "data_offset": 0, 00:17:13.244 "data_size": 65536 00:17:13.244 } 00:17:13.244 ] 00:17:13.244 }' 00:17:13.244 18:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.244 18:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.244 18:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.503 18:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.503 18:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:14.439 18:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.439 18:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.439 18:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.439 18:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.439 
18:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.439 18:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.439 18:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.439 18:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.439 18:01:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.439 18:01:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.439 18:01:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.439 18:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.439 "name": "raid_bdev1", 00:17:14.439 "uuid": "628ffe58-875a-486f-9f0a-03f6d4456933", 00:17:14.439 "strip_size_kb": 64, 00:17:14.439 "state": "online", 00:17:14.439 "raid_level": "raid5f", 00:17:14.439 "superblock": false, 00:17:14.439 "num_base_bdevs": 4, 00:17:14.439 "num_base_bdevs_discovered": 4, 00:17:14.439 "num_base_bdevs_operational": 4, 00:17:14.439 "process": { 00:17:14.439 "type": "rebuild", 00:17:14.439 "target": "spare", 00:17:14.439 "progress": { 00:17:14.439 "blocks": 130560, 00:17:14.439 "percent": 66 00:17:14.439 } 00:17:14.439 }, 00:17:14.439 "base_bdevs_list": [ 00:17:14.439 { 00:17:14.439 "name": "spare", 00:17:14.439 "uuid": "cffff2b8-40fe-52a0-8343-3638e2518dcc", 00:17:14.439 "is_configured": true, 00:17:14.439 "data_offset": 0, 00:17:14.439 "data_size": 65536 00:17:14.439 }, 00:17:14.439 { 00:17:14.439 "name": "BaseBdev2", 00:17:14.439 "uuid": "cce18395-8e60-5f9a-a1b7-f8686cc2cf96", 00:17:14.439 "is_configured": true, 00:17:14.439 "data_offset": 0, 00:17:14.439 "data_size": 65536 00:17:14.439 }, 00:17:14.439 { 00:17:14.439 "name": "BaseBdev3", 00:17:14.439 "uuid": "a341c3d7-00ca-5477-b1ea-16bff341f553", 
00:17:14.439 "is_configured": true, 00:17:14.439 "data_offset": 0, 00:17:14.439 "data_size": 65536 00:17:14.439 }, 00:17:14.439 { 00:17:14.439 "name": "BaseBdev4", 00:17:14.439 "uuid": "7dd79bbc-36e8-58b0-af0f-237dd910bafe", 00:17:14.439 "is_configured": true, 00:17:14.439 "data_offset": 0, 00:17:14.439 "data_size": 65536 00:17:14.439 } 00:17:14.439 ] 00:17:14.439 }' 00:17:14.439 18:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.439 18:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.439 18:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.698 18:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.698 18:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:15.635 18:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.635 18:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.635 18:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.635 18:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.635 18:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.635 18:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.635 18:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.635 18:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.635 18:01:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.635 18:01:57 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:15.635 18:01:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.635 18:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.635 "name": "raid_bdev1", 00:17:15.635 "uuid": "628ffe58-875a-486f-9f0a-03f6d4456933", 00:17:15.635 "strip_size_kb": 64, 00:17:15.635 "state": "online", 00:17:15.635 "raid_level": "raid5f", 00:17:15.635 "superblock": false, 00:17:15.635 "num_base_bdevs": 4, 00:17:15.635 "num_base_bdevs_discovered": 4, 00:17:15.635 "num_base_bdevs_operational": 4, 00:17:15.635 "process": { 00:17:15.635 "type": "rebuild", 00:17:15.635 "target": "spare", 00:17:15.635 "progress": { 00:17:15.635 "blocks": 151680, 00:17:15.635 "percent": 77 00:17:15.635 } 00:17:15.635 }, 00:17:15.635 "base_bdevs_list": [ 00:17:15.635 { 00:17:15.635 "name": "spare", 00:17:15.635 "uuid": "cffff2b8-40fe-52a0-8343-3638e2518dcc", 00:17:15.635 "is_configured": true, 00:17:15.635 "data_offset": 0, 00:17:15.635 "data_size": 65536 00:17:15.635 }, 00:17:15.635 { 00:17:15.635 "name": "BaseBdev2", 00:17:15.635 "uuid": "cce18395-8e60-5f9a-a1b7-f8686cc2cf96", 00:17:15.635 "is_configured": true, 00:17:15.635 "data_offset": 0, 00:17:15.635 "data_size": 65536 00:17:15.635 }, 00:17:15.635 { 00:17:15.635 "name": "BaseBdev3", 00:17:15.635 "uuid": "a341c3d7-00ca-5477-b1ea-16bff341f553", 00:17:15.635 "is_configured": true, 00:17:15.635 "data_offset": 0, 00:17:15.635 "data_size": 65536 00:17:15.635 }, 00:17:15.635 { 00:17:15.635 "name": "BaseBdev4", 00:17:15.635 "uuid": "7dd79bbc-36e8-58b0-af0f-237dd910bafe", 00:17:15.635 "is_configured": true, 00:17:15.635 "data_offset": 0, 00:17:15.635 "data_size": 65536 00:17:15.635 } 00:17:15.635 ] 00:17:15.635 }' 00:17:15.635 18:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.635 18:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:17:15.635 18:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.635 18:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.635 18:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.012 18:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:17.012 18:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.012 18:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.012 18:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.012 18:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.012 18:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.012 18:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.012 18:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.012 18:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.012 18:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.012 18:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.012 18:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.012 "name": "raid_bdev1", 00:17:17.012 "uuid": "628ffe58-875a-486f-9f0a-03f6d4456933", 00:17:17.012 "strip_size_kb": 64, 00:17:17.012 "state": "online", 00:17:17.012 "raid_level": "raid5f", 00:17:17.012 "superblock": false, 00:17:17.012 "num_base_bdevs": 4, 00:17:17.012 "num_base_bdevs_discovered": 4, 00:17:17.012 "num_base_bdevs_operational": 4, 00:17:17.012 
"process": { 00:17:17.012 "type": "rebuild", 00:17:17.012 "target": "spare", 00:17:17.012 "progress": { 00:17:17.012 "blocks": 172800, 00:17:17.012 "percent": 87 00:17:17.012 } 00:17:17.012 }, 00:17:17.012 "base_bdevs_list": [ 00:17:17.012 { 00:17:17.012 "name": "spare", 00:17:17.012 "uuid": "cffff2b8-40fe-52a0-8343-3638e2518dcc", 00:17:17.012 "is_configured": true, 00:17:17.012 "data_offset": 0, 00:17:17.012 "data_size": 65536 00:17:17.012 }, 00:17:17.012 { 00:17:17.012 "name": "BaseBdev2", 00:17:17.012 "uuid": "cce18395-8e60-5f9a-a1b7-f8686cc2cf96", 00:17:17.012 "is_configured": true, 00:17:17.012 "data_offset": 0, 00:17:17.012 "data_size": 65536 00:17:17.012 }, 00:17:17.012 { 00:17:17.012 "name": "BaseBdev3", 00:17:17.012 "uuid": "a341c3d7-00ca-5477-b1ea-16bff341f553", 00:17:17.012 "is_configured": true, 00:17:17.012 "data_offset": 0, 00:17:17.012 "data_size": 65536 00:17:17.012 }, 00:17:17.012 { 00:17:17.012 "name": "BaseBdev4", 00:17:17.012 "uuid": "7dd79bbc-36e8-58b0-af0f-237dd910bafe", 00:17:17.012 "is_configured": true, 00:17:17.012 "data_offset": 0, 00:17:17.012 "data_size": 65536 00:17:17.012 } 00:17:17.012 ] 00:17:17.012 }' 00:17:17.012 18:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.012 18:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.012 18:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.012 18:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.012 18:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.950 18:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:17.950 18:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.950 18:01:59 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.950 18:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.950 18:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.950 18:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.950 18:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.950 18:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.950 18:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.950 18:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.950 18:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.950 18:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.950 "name": "raid_bdev1", 00:17:17.950 "uuid": "628ffe58-875a-486f-9f0a-03f6d4456933", 00:17:17.950 "strip_size_kb": 64, 00:17:17.950 "state": "online", 00:17:17.950 "raid_level": "raid5f", 00:17:17.950 "superblock": false, 00:17:17.950 "num_base_bdevs": 4, 00:17:17.950 "num_base_bdevs_discovered": 4, 00:17:17.950 "num_base_bdevs_operational": 4, 00:17:17.950 "process": { 00:17:17.950 "type": "rebuild", 00:17:17.950 "target": "spare", 00:17:17.950 "progress": { 00:17:17.950 "blocks": 195840, 00:17:17.950 "percent": 99 00:17:17.950 } 00:17:17.950 }, 00:17:17.950 "base_bdevs_list": [ 00:17:17.950 { 00:17:17.950 "name": "spare", 00:17:17.950 "uuid": "cffff2b8-40fe-52a0-8343-3638e2518dcc", 00:17:17.950 "is_configured": true, 00:17:17.950 "data_offset": 0, 00:17:17.950 "data_size": 65536 00:17:17.950 }, 00:17:17.950 { 00:17:17.950 "name": "BaseBdev2", 00:17:17.950 "uuid": "cce18395-8e60-5f9a-a1b7-f8686cc2cf96", 00:17:17.950 "is_configured": true, 00:17:17.950 
"data_offset": 0, 00:17:17.950 "data_size": 65536 00:17:17.950 }, 00:17:17.950 { 00:17:17.950 "name": "BaseBdev3", 00:17:17.950 "uuid": "a341c3d7-00ca-5477-b1ea-16bff341f553", 00:17:17.950 "is_configured": true, 00:17:17.950 "data_offset": 0, 00:17:17.950 "data_size": 65536 00:17:17.950 }, 00:17:17.950 { 00:17:17.950 "name": "BaseBdev4", 00:17:17.950 "uuid": "7dd79bbc-36e8-58b0-af0f-237dd910bafe", 00:17:17.950 "is_configured": true, 00:17:17.950 "data_offset": 0, 00:17:17.950 "data_size": 65536 00:17:17.950 } 00:17:17.950 ] 00:17:17.950 }' 00:17:17.950 18:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.950 [2024-11-26 18:01:59.679070] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:17.950 [2024-11-26 18:01:59.679212] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:17.950 [2024-11-26 18:01:59.679295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.950 18:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.950 18:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.950 18:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.950 18:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:18.887 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:18.887 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.887 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.887 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.887 18:02:00 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.887 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.887 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.887 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.887 18:02:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.887 18:02:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.887 18:02:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.146 "name": "raid_bdev1", 00:17:19.146 "uuid": "628ffe58-875a-486f-9f0a-03f6d4456933", 00:17:19.146 "strip_size_kb": 64, 00:17:19.146 "state": "online", 00:17:19.146 "raid_level": "raid5f", 00:17:19.146 "superblock": false, 00:17:19.146 "num_base_bdevs": 4, 00:17:19.146 "num_base_bdevs_discovered": 4, 00:17:19.146 "num_base_bdevs_operational": 4, 00:17:19.146 "base_bdevs_list": [ 00:17:19.146 { 00:17:19.146 "name": "spare", 00:17:19.146 "uuid": "cffff2b8-40fe-52a0-8343-3638e2518dcc", 00:17:19.146 "is_configured": true, 00:17:19.146 "data_offset": 0, 00:17:19.146 "data_size": 65536 00:17:19.146 }, 00:17:19.146 { 00:17:19.146 "name": "BaseBdev2", 00:17:19.146 "uuid": "cce18395-8e60-5f9a-a1b7-f8686cc2cf96", 00:17:19.146 "is_configured": true, 00:17:19.146 "data_offset": 0, 00:17:19.146 "data_size": 65536 00:17:19.146 }, 00:17:19.146 { 00:17:19.146 "name": "BaseBdev3", 00:17:19.146 "uuid": "a341c3d7-00ca-5477-b1ea-16bff341f553", 00:17:19.146 "is_configured": true, 00:17:19.146 "data_offset": 0, 00:17:19.146 "data_size": 65536 00:17:19.146 }, 00:17:19.146 { 00:17:19.146 "name": "BaseBdev4", 00:17:19.146 "uuid": "7dd79bbc-36e8-58b0-af0f-237dd910bafe", 00:17:19.146 "is_configured": 
true, 00:17:19.146 "data_offset": 0, 00:17:19.146 "data_size": 65536 00:17:19.146 } 00:17:19.146 ] 00:17:19.146 }' 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.146 "name": "raid_bdev1", 00:17:19.146 "uuid": "628ffe58-875a-486f-9f0a-03f6d4456933", 00:17:19.146 "strip_size_kb": 64, 00:17:19.146 "state": 
"online", 00:17:19.146 "raid_level": "raid5f", 00:17:19.146 "superblock": false, 00:17:19.146 "num_base_bdevs": 4, 00:17:19.146 "num_base_bdevs_discovered": 4, 00:17:19.146 "num_base_bdevs_operational": 4, 00:17:19.146 "base_bdevs_list": [ 00:17:19.146 { 00:17:19.146 "name": "spare", 00:17:19.146 "uuid": "cffff2b8-40fe-52a0-8343-3638e2518dcc", 00:17:19.146 "is_configured": true, 00:17:19.146 "data_offset": 0, 00:17:19.146 "data_size": 65536 00:17:19.146 }, 00:17:19.146 { 00:17:19.146 "name": "BaseBdev2", 00:17:19.146 "uuid": "cce18395-8e60-5f9a-a1b7-f8686cc2cf96", 00:17:19.146 "is_configured": true, 00:17:19.146 "data_offset": 0, 00:17:19.146 "data_size": 65536 00:17:19.146 }, 00:17:19.146 { 00:17:19.146 "name": "BaseBdev3", 00:17:19.146 "uuid": "a341c3d7-00ca-5477-b1ea-16bff341f553", 00:17:19.146 "is_configured": true, 00:17:19.146 "data_offset": 0, 00:17:19.146 "data_size": 65536 00:17:19.146 }, 00:17:19.146 { 00:17:19.146 "name": "BaseBdev4", 00:17:19.146 "uuid": "7dd79bbc-36e8-58b0-af0f-237dd910bafe", 00:17:19.146 "is_configured": true, 00:17:19.146 "data_offset": 0, 00:17:19.146 "data_size": 65536 00:17:19.146 } 00:17:19.146 ] 00:17:19.146 }' 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.146 18:02:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.146 18:02:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.405 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.405 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.405 "name": "raid_bdev1", 00:17:19.405 "uuid": "628ffe58-875a-486f-9f0a-03f6d4456933", 00:17:19.405 "strip_size_kb": 64, 00:17:19.405 "state": "online", 00:17:19.405 "raid_level": "raid5f", 00:17:19.405 "superblock": false, 00:17:19.405 "num_base_bdevs": 4, 00:17:19.405 "num_base_bdevs_discovered": 4, 00:17:19.405 "num_base_bdevs_operational": 4, 00:17:19.405 "base_bdevs_list": [ 00:17:19.405 { 00:17:19.405 "name": "spare", 00:17:19.405 "uuid": "cffff2b8-40fe-52a0-8343-3638e2518dcc", 00:17:19.405 "is_configured": true, 00:17:19.405 "data_offset": 0, 00:17:19.405 "data_size": 65536 00:17:19.405 }, 00:17:19.405 { 00:17:19.405 
"name": "BaseBdev2", 00:17:19.405 "uuid": "cce18395-8e60-5f9a-a1b7-f8686cc2cf96", 00:17:19.405 "is_configured": true, 00:17:19.405 "data_offset": 0, 00:17:19.405 "data_size": 65536 00:17:19.405 }, 00:17:19.405 { 00:17:19.405 "name": "BaseBdev3", 00:17:19.405 "uuid": "a341c3d7-00ca-5477-b1ea-16bff341f553", 00:17:19.405 "is_configured": true, 00:17:19.405 "data_offset": 0, 00:17:19.405 "data_size": 65536 00:17:19.405 }, 00:17:19.405 { 00:17:19.405 "name": "BaseBdev4", 00:17:19.405 "uuid": "7dd79bbc-36e8-58b0-af0f-237dd910bafe", 00:17:19.405 "is_configured": true, 00:17:19.405 "data_offset": 0, 00:17:19.405 "data_size": 65536 00:17:19.405 } 00:17:19.405 ] 00:17:19.405 }' 00:17:19.405 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.405 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.664 [2024-11-26 18:02:01.440350] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.664 [2024-11-26 18:02:01.440462] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.664 [2024-11-26 18:02:01.440611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.664 [2024-11-26 18:02:01.440739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.664 [2024-11-26 18:02:01.440754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.664 18:02:01 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:19.664 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:19.923 /dev/nbd0 00:17:19.923 18:02:01 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:19.923 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:19.923 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:19.923 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:19.923 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:19.923 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:19.923 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:19.923 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:19.923 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:19.923 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:19.923 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.923 1+0 records in 00:17:19.923 1+0 records out 00:17:19.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379145 s, 10.8 MB/s 00:17:19.923 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:20.182 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:20.182 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:20.182 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:20.182 18:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:20.182 18:02:01 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:20.182 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:20.182 18:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:20.182 /dev/nbd1 00:17:20.182 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:20.442 1+0 records in 00:17:20.442 1+0 records out 00:17:20.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433297 s, 9.5 MB/s 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.442 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:20.702 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:20.702 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:20.702 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:20.702 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.702 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.702 18:02:02 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:20.702 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:20.702 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.702 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.702 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:20.960 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:20.960 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:20.960 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:20.960 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.960 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.960 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:20.960 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:20.960 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.960 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:20.960 18:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85022 00:17:20.960 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85022 ']' 00:17:20.960 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85022 00:17:20.960 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:20.960 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.960 18:02:02 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85022 00:17:21.219 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.219 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.219 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85022' 00:17:21.219 killing process with pid 85022 00:17:21.219 Received shutdown signal, test time was about 60.000000 seconds 00:17:21.219 00:17:21.219 Latency(us) 00:17:21.219 [2024-11-26T18:02:03.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.219 [2024-11-26T18:02:03.082Z] =================================================================================================================== 00:17:21.219 [2024-11-26T18:02:03.082Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:21.219 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85022 00:17:21.219 [2024-11-26 18:02:02.835111] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.219 18:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85022 00:17:21.784 [2024-11-26 18:02:03.404170] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:23.159 18:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:23.159 00:17:23.159 real 0m20.722s 00:17:23.159 user 0m24.583s 00:17:23.159 sys 0m2.515s 00:17:23.159 18:02:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.160 ************************************ 00:17:23.160 END TEST raid5f_rebuild_test 00:17:23.160 ************************************ 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.160 18:02:04 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb 
raid_rebuild_test raid5f 4 true false true 00:17:23.160 18:02:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:23.160 18:02:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.160 18:02:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.160 ************************************ 00:17:23.160 START TEST raid5f_rebuild_test_sb 00:17:23.160 ************************************ 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:23.160 18:02:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85545 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85545 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85545 ']' 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.160 18:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.160 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:23.160 Zero copy mechanism will not be used. 00:17:23.160 [2024-11-26 18:02:04.856632] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:17:23.160 [2024-11-26 18:02:04.856764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85545 ] 00:17:23.160 [2024-11-26 18:02:05.020407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.418 [2024-11-26 18:02:05.160367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.678 [2024-11-26 18:02:05.393456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.678 [2024-11-26 18:02:05.393650] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.246 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.247 BaseBdev1_malloc 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.247 [2024-11-26 18:02:05.859700] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:24.247 [2024-11-26 18:02:05.860190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.247 [2024-11-26 18:02:05.860332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:24.247 [2024-11-26 18:02:05.860432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.247 [2024-11-26 18:02:05.862950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.247 [2024-11-26 18:02:05.863163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:24.247 BaseBdev1 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.247 BaseBdev2_malloc 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.247 [2024-11-26 18:02:05.921592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:24.247 [2024-11-26 18:02:05.921873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:24.247 [2024-11-26 18:02:05.921948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:24.247 [2024-11-26 18:02:05.922008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.247 [2024-11-26 18:02:05.924454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.247 [2024-11-26 18:02:05.924553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:24.247 BaseBdev2 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.247 BaseBdev3_malloc 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.247 [2024-11-26 18:02:05.995783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:24.247 [2024-11-26 18:02:05.996070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.247 [2024-11-26 18:02:05.996168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:24.247 [2024-11-26 
18:02:05.996276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.247 [2024-11-26 18:02:05.998793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.247 [2024-11-26 18:02:05.998948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:24.247 BaseBdev3 00:17:24.247 18:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.247 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:24.247 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:24.247 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.247 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.247 BaseBdev4_malloc 00:17:24.247 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.247 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:24.247 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.247 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.247 [2024-11-26 18:02:06.054081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:24.247 [2024-11-26 18:02:06.054346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.247 [2024-11-26 18:02:06.054431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:24.247 [2024-11-26 18:02:06.054529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.247 [2024-11-26 18:02:06.056839] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:24.247 [2024-11-26 18:02:06.056925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:24.247 BaseBdev4 00:17:24.247 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.247 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:24.247 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.247 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.247 spare_malloc 00:17:24.247 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.247 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:24.247 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.247 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.507 spare_delay 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.507 [2024-11-26 18:02:06.123976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:24.507 [2024-11-26 18:02:06.124094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.507 [2024-11-26 18:02:06.124134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:17:24.507 [2024-11-26 18:02:06.124168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.507 [2024-11-26 18:02:06.126481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.507 [2024-11-26 18:02:06.126525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:24.507 spare 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.507 [2024-11-26 18:02:06.136009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.507 [2024-11-26 18:02:06.138105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:24.507 [2024-11-26 18:02:06.138176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:24.507 [2024-11-26 18:02:06.138234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:24.507 [2024-11-26 18:02:06.138441] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:24.507 [2024-11-26 18:02:06.138459] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:24.507 [2024-11-26 18:02:06.138763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:24.507 [2024-11-26 18:02:06.147914] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:24.507 [2024-11-26 18:02:06.147984] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:17:24.507 [2024-11-26 18:02:06.148270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.507 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.507 18:02:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.507 "name": "raid_bdev1", 00:17:24.507 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:24.507 "strip_size_kb": 64, 00:17:24.507 "state": "online", 00:17:24.507 "raid_level": "raid5f", 00:17:24.507 "superblock": true, 00:17:24.507 "num_base_bdevs": 4, 00:17:24.507 "num_base_bdevs_discovered": 4, 00:17:24.507 "num_base_bdevs_operational": 4, 00:17:24.507 "base_bdevs_list": [ 00:17:24.507 { 00:17:24.507 "name": "BaseBdev1", 00:17:24.507 "uuid": "aa1288f1-88ca-599e-b2c0-4a238d716123", 00:17:24.507 "is_configured": true, 00:17:24.507 "data_offset": 2048, 00:17:24.507 "data_size": 63488 00:17:24.507 }, 00:17:24.507 { 00:17:24.507 "name": "BaseBdev2", 00:17:24.507 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:24.507 "is_configured": true, 00:17:24.507 "data_offset": 2048, 00:17:24.507 "data_size": 63488 00:17:24.507 }, 00:17:24.507 { 00:17:24.507 "name": "BaseBdev3", 00:17:24.507 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:24.507 "is_configured": true, 00:17:24.507 "data_offset": 2048, 00:17:24.507 "data_size": 63488 00:17:24.507 }, 00:17:24.507 { 00:17:24.507 "name": "BaseBdev4", 00:17:24.507 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:24.507 "is_configured": true, 00:17:24.507 "data_offset": 2048, 00:17:24.507 "data_size": 63488 00:17:24.507 } 00:17:24.507 ] 00:17:24.507 }' 00:17:24.508 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.508 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.767 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:24.767 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:24.767 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.767 18:02:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.767 [2024-11-26 18:02:06.553950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.767 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.768 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:24.768 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.768 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:24.768 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.768 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.768 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.027 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:25.027 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:25.027 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:25.027 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:25.027 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:25.027 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:25.027 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:25.027 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:25.027 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:25.027 18:02:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:25.027 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:25.027 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:25.027 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:25.027 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:25.027 [2024-11-26 18:02:06.845736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:25.027 /dev/nbd0 00:17:25.287 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:25.287 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:25.287 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:25.287 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:25.287 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:25.287 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:25.287 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:25.288 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:25.288 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:25.288 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:25.288 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:25.288 1+0 records in 00:17:25.288 
1+0 records out 00:17:25.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368264 s, 11.1 MB/s 00:17:25.288 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.288 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:25.288 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.288 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:25.288 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:25.288 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:25.288 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:25.288 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:25.288 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:25.288 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:25.288 18:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:25.858 496+0 records in 00:17:25.858 496+0 records out 00:17:25.858 97517568 bytes (98 MB, 93 MiB) copied, 0.562521 s, 173 MB/s 00:17:25.858 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:25.858 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:25.858 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:25.858 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:25.858 18:02:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:25.858 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.858 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:25.858 [2024-11-26 18:02:07.714057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.116 [2024-11-26 18:02:07.753640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:26.116 18:02:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.116 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.117 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.117 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.117 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.117 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.117 "name": "raid_bdev1", 00:17:26.117 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:26.117 "strip_size_kb": 64, 00:17:26.117 "state": "online", 00:17:26.117 "raid_level": "raid5f", 00:17:26.117 "superblock": true, 00:17:26.117 "num_base_bdevs": 4, 00:17:26.117 "num_base_bdevs_discovered": 3, 00:17:26.117 "num_base_bdevs_operational": 3, 00:17:26.117 
"base_bdevs_list": [ 00:17:26.117 { 00:17:26.117 "name": null, 00:17:26.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.117 "is_configured": false, 00:17:26.117 "data_offset": 0, 00:17:26.117 "data_size": 63488 00:17:26.117 }, 00:17:26.117 { 00:17:26.117 "name": "BaseBdev2", 00:17:26.117 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:26.117 "is_configured": true, 00:17:26.117 "data_offset": 2048, 00:17:26.117 "data_size": 63488 00:17:26.117 }, 00:17:26.117 { 00:17:26.117 "name": "BaseBdev3", 00:17:26.117 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:26.117 "is_configured": true, 00:17:26.117 "data_offset": 2048, 00:17:26.117 "data_size": 63488 00:17:26.117 }, 00:17:26.117 { 00:17:26.117 "name": "BaseBdev4", 00:17:26.117 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:26.117 "is_configured": true, 00:17:26.117 "data_offset": 2048, 00:17:26.117 "data_size": 63488 00:17:26.117 } 00:17:26.117 ] 00:17:26.117 }' 00:17:26.117 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.117 18:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.375 18:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:26.375 18:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.375 18:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.375 [2024-11-26 18:02:08.200955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.375 [2024-11-26 18:02:08.221993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:26.375 18:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.375 18:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:26.375 [2024-11-26 18:02:08.233868] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.757 "name": "raid_bdev1", 00:17:27.757 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:27.757 "strip_size_kb": 64, 00:17:27.757 "state": "online", 00:17:27.757 "raid_level": "raid5f", 00:17:27.757 "superblock": true, 00:17:27.757 "num_base_bdevs": 4, 00:17:27.757 "num_base_bdevs_discovered": 4, 00:17:27.757 "num_base_bdevs_operational": 4, 00:17:27.757 "process": { 00:17:27.757 "type": "rebuild", 00:17:27.757 "target": "spare", 00:17:27.757 "progress": { 00:17:27.757 "blocks": 19200, 00:17:27.757 "percent": 10 00:17:27.757 } 00:17:27.757 }, 00:17:27.757 "base_bdevs_list": [ 00:17:27.757 { 00:17:27.757 "name": "spare", 00:17:27.757 "uuid": 
"3506d7dd-7f66-5933-8be5-4524a2039923", 00:17:27.757 "is_configured": true, 00:17:27.757 "data_offset": 2048, 00:17:27.757 "data_size": 63488 00:17:27.757 }, 00:17:27.757 { 00:17:27.757 "name": "BaseBdev2", 00:17:27.757 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:27.757 "is_configured": true, 00:17:27.757 "data_offset": 2048, 00:17:27.757 "data_size": 63488 00:17:27.757 }, 00:17:27.757 { 00:17:27.757 "name": "BaseBdev3", 00:17:27.757 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:27.757 "is_configured": true, 00:17:27.757 "data_offset": 2048, 00:17:27.757 "data_size": 63488 00:17:27.757 }, 00:17:27.757 { 00:17:27.757 "name": "BaseBdev4", 00:17:27.757 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:27.757 "is_configured": true, 00:17:27.757 "data_offset": 2048, 00:17:27.757 "data_size": 63488 00:17:27.757 } 00:17:27.757 ] 00:17:27.757 }' 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.757 [2024-11-26 18:02:09.377731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.757 [2024-11-26 18:02:09.443745] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:27.757 [2024-11-26 18:02:09.443853] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.757 [2024-11-26 18:02:09.443874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.757 [2024-11-26 18:02:09.443886] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.757 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.757 "name": "raid_bdev1", 00:17:27.757 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:27.757 "strip_size_kb": 64, 00:17:27.757 "state": "online", 00:17:27.757 "raid_level": "raid5f", 00:17:27.757 "superblock": true, 00:17:27.757 "num_base_bdevs": 4, 00:17:27.757 "num_base_bdevs_discovered": 3, 00:17:27.757 "num_base_bdevs_operational": 3, 00:17:27.757 "base_bdevs_list": [ 00:17:27.757 { 00:17:27.757 "name": null, 00:17:27.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.758 "is_configured": false, 00:17:27.758 "data_offset": 0, 00:17:27.758 "data_size": 63488 00:17:27.758 }, 00:17:27.758 { 00:17:27.758 "name": "BaseBdev2", 00:17:27.758 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:27.758 "is_configured": true, 00:17:27.758 "data_offset": 2048, 00:17:27.758 "data_size": 63488 00:17:27.758 }, 00:17:27.758 { 00:17:27.758 "name": "BaseBdev3", 00:17:27.758 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:27.758 "is_configured": true, 00:17:27.758 "data_offset": 2048, 00:17:27.758 "data_size": 63488 00:17:27.758 }, 00:17:27.758 { 00:17:27.758 "name": "BaseBdev4", 00:17:27.758 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:27.758 "is_configured": true, 00:17:27.758 "data_offset": 2048, 00:17:27.758 "data_size": 63488 00:17:27.758 } 00:17:27.758 ] 00:17:27.758 }' 00:17:27.758 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.758 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.326 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:28.326 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.326 
18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:28.326 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:28.326 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.326 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.326 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.326 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.326 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.326 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.326 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.326 "name": "raid_bdev1", 00:17:28.326 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:28.326 "strip_size_kb": 64, 00:17:28.326 "state": "online", 00:17:28.326 "raid_level": "raid5f", 00:17:28.326 "superblock": true, 00:17:28.326 "num_base_bdevs": 4, 00:17:28.326 "num_base_bdevs_discovered": 3, 00:17:28.326 "num_base_bdevs_operational": 3, 00:17:28.326 "base_bdevs_list": [ 00:17:28.326 { 00:17:28.326 "name": null, 00:17:28.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.326 "is_configured": false, 00:17:28.326 "data_offset": 0, 00:17:28.326 "data_size": 63488 00:17:28.326 }, 00:17:28.326 { 00:17:28.326 "name": "BaseBdev2", 00:17:28.326 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:28.326 "is_configured": true, 00:17:28.326 "data_offset": 2048, 00:17:28.326 "data_size": 63488 00:17:28.326 }, 00:17:28.326 { 00:17:28.326 "name": "BaseBdev3", 00:17:28.326 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:28.326 "is_configured": true, 00:17:28.326 "data_offset": 2048, 00:17:28.326 
"data_size": 63488 00:17:28.326 }, 00:17:28.326 { 00:17:28.326 "name": "BaseBdev4", 00:17:28.326 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:28.326 "is_configured": true, 00:17:28.326 "data_offset": 2048, 00:17:28.326 "data_size": 63488 00:17:28.326 } 00:17:28.326 ] 00:17:28.326 }' 00:17:28.326 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.326 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:28.326 18:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.326 18:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:28.326 18:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:28.327 18:02:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.327 18:02:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.327 [2024-11-26 18:02:10.040345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:28.327 [2024-11-26 18:02:10.058441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:28.327 18:02:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.327 18:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:28.327 [2024-11-26 18:02:10.069878] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:29.265 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.265 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.265 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.265 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.265 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.265 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.265 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.265 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.265 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.265 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.265 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.266 "name": "raid_bdev1", 00:17:29.266 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:29.266 "strip_size_kb": 64, 00:17:29.266 "state": "online", 00:17:29.266 "raid_level": "raid5f", 00:17:29.266 "superblock": true, 00:17:29.266 "num_base_bdevs": 4, 00:17:29.266 "num_base_bdevs_discovered": 4, 00:17:29.266 "num_base_bdevs_operational": 4, 00:17:29.266 "process": { 00:17:29.266 "type": "rebuild", 00:17:29.266 "target": "spare", 00:17:29.266 "progress": { 00:17:29.266 "blocks": 17280, 00:17:29.266 "percent": 9 00:17:29.266 } 00:17:29.266 }, 00:17:29.266 "base_bdevs_list": [ 00:17:29.266 { 00:17:29.266 "name": "spare", 00:17:29.266 "uuid": "3506d7dd-7f66-5933-8be5-4524a2039923", 00:17:29.266 "is_configured": true, 00:17:29.266 "data_offset": 2048, 00:17:29.266 "data_size": 63488 00:17:29.266 }, 00:17:29.266 { 00:17:29.266 "name": "BaseBdev2", 00:17:29.266 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:29.266 "is_configured": true, 00:17:29.266 "data_offset": 2048, 00:17:29.266 "data_size": 63488 00:17:29.266 }, 00:17:29.266 { 
00:17:29.266 "name": "BaseBdev3", 00:17:29.266 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:29.266 "is_configured": true, 00:17:29.266 "data_offset": 2048, 00:17:29.266 "data_size": 63488 00:17:29.266 }, 00:17:29.266 { 00:17:29.266 "name": "BaseBdev4", 00:17:29.266 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:29.266 "is_configured": true, 00:17:29.266 "data_offset": 2048, 00:17:29.266 "data_size": 63488 00:17:29.266 } 00:17:29.266 ] 00:17:29.266 }' 00:17:29.266 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:29.525 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=672 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.525 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.525 "name": "raid_bdev1", 00:17:29.525 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:29.525 "strip_size_kb": 64, 00:17:29.525 "state": "online", 00:17:29.525 "raid_level": "raid5f", 00:17:29.525 "superblock": true, 00:17:29.525 "num_base_bdevs": 4, 00:17:29.525 "num_base_bdevs_discovered": 4, 00:17:29.525 "num_base_bdevs_operational": 4, 00:17:29.525 "process": { 00:17:29.525 "type": "rebuild", 00:17:29.525 "target": "spare", 00:17:29.525 "progress": { 00:17:29.525 "blocks": 21120, 00:17:29.525 "percent": 11 00:17:29.525 } 00:17:29.525 }, 00:17:29.525 "base_bdevs_list": [ 00:17:29.525 { 00:17:29.525 "name": "spare", 00:17:29.525 "uuid": "3506d7dd-7f66-5933-8be5-4524a2039923", 00:17:29.525 "is_configured": true, 00:17:29.525 "data_offset": 2048, 00:17:29.525 "data_size": 63488 00:17:29.525 }, 00:17:29.525 { 00:17:29.525 "name": "BaseBdev2", 00:17:29.525 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:29.525 "is_configured": true, 00:17:29.525 "data_offset": 2048, 00:17:29.525 "data_size": 63488 00:17:29.525 }, 00:17:29.525 { 
00:17:29.525 "name": "BaseBdev3", 00:17:29.525 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:29.526 "is_configured": true, 00:17:29.526 "data_offset": 2048, 00:17:29.526 "data_size": 63488 00:17:29.526 }, 00:17:29.526 { 00:17:29.526 "name": "BaseBdev4", 00:17:29.526 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:29.526 "is_configured": true, 00:17:29.526 "data_offset": 2048, 00:17:29.526 "data_size": 63488 00:17:29.526 } 00:17:29.526 ] 00:17:29.526 }' 00:17:29.526 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.526 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.526 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.526 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.526 18:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:30.902 18:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.902 18:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.902 18:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.902 18:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.902 18:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.902 18:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.903 18:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.903 18:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.903 18:02:12 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.903 18:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.903 18:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.903 18:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.903 "name": "raid_bdev1", 00:17:30.903 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:30.903 "strip_size_kb": 64, 00:17:30.903 "state": "online", 00:17:30.903 "raid_level": "raid5f", 00:17:30.903 "superblock": true, 00:17:30.903 "num_base_bdevs": 4, 00:17:30.903 "num_base_bdevs_discovered": 4, 00:17:30.903 "num_base_bdevs_operational": 4, 00:17:30.903 "process": { 00:17:30.903 "type": "rebuild", 00:17:30.903 "target": "spare", 00:17:30.903 "progress": { 00:17:30.903 "blocks": 44160, 00:17:30.903 "percent": 23 00:17:30.903 } 00:17:30.903 }, 00:17:30.903 "base_bdevs_list": [ 00:17:30.903 { 00:17:30.903 "name": "spare", 00:17:30.903 "uuid": "3506d7dd-7f66-5933-8be5-4524a2039923", 00:17:30.903 "is_configured": true, 00:17:30.903 "data_offset": 2048, 00:17:30.903 "data_size": 63488 00:17:30.903 }, 00:17:30.903 { 00:17:30.903 "name": "BaseBdev2", 00:17:30.903 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:30.903 "is_configured": true, 00:17:30.903 "data_offset": 2048, 00:17:30.903 "data_size": 63488 00:17:30.903 }, 00:17:30.903 { 00:17:30.903 "name": "BaseBdev3", 00:17:30.903 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:30.903 "is_configured": true, 00:17:30.903 "data_offset": 2048, 00:17:30.903 "data_size": 63488 00:17:30.903 }, 00:17:30.903 { 00:17:30.903 "name": "BaseBdev4", 00:17:30.903 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:30.903 "is_configured": true, 00:17:30.903 "data_offset": 2048, 00:17:30.903 "data_size": 63488 00:17:30.903 } 00:17:30.903 ] 00:17:30.903 }' 00:17:30.903 18:02:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.903 18:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.903 18:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.903 18:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.903 18:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:31.840 18:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:31.840 18:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.840 18:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.841 18:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.841 18:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.841 18:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.841 18:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.841 18:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.841 18:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.841 18:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.841 18:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.841 18:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.841 "name": "raid_bdev1", 00:17:31.841 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:31.841 "strip_size_kb": 64, 00:17:31.841 "state": 
"online", 00:17:31.841 "raid_level": "raid5f", 00:17:31.841 "superblock": true, 00:17:31.841 "num_base_bdevs": 4, 00:17:31.841 "num_base_bdevs_discovered": 4, 00:17:31.841 "num_base_bdevs_operational": 4, 00:17:31.841 "process": { 00:17:31.841 "type": "rebuild", 00:17:31.841 "target": "spare", 00:17:31.841 "progress": { 00:17:31.841 "blocks": 65280, 00:17:31.841 "percent": 34 00:17:31.841 } 00:17:31.841 }, 00:17:31.841 "base_bdevs_list": [ 00:17:31.841 { 00:17:31.841 "name": "spare", 00:17:31.841 "uuid": "3506d7dd-7f66-5933-8be5-4524a2039923", 00:17:31.841 "is_configured": true, 00:17:31.841 "data_offset": 2048, 00:17:31.841 "data_size": 63488 00:17:31.841 }, 00:17:31.841 { 00:17:31.841 "name": "BaseBdev2", 00:17:31.841 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:31.841 "is_configured": true, 00:17:31.841 "data_offset": 2048, 00:17:31.841 "data_size": 63488 00:17:31.841 }, 00:17:31.841 { 00:17:31.841 "name": "BaseBdev3", 00:17:31.841 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:31.841 "is_configured": true, 00:17:31.841 "data_offset": 2048, 00:17:31.841 "data_size": 63488 00:17:31.841 }, 00:17:31.841 { 00:17:31.841 "name": "BaseBdev4", 00:17:31.841 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:31.841 "is_configured": true, 00:17:31.841 "data_offset": 2048, 00:17:31.841 "data_size": 63488 00:17:31.841 } 00:17:31.841 ] 00:17:31.841 }' 00:17:31.841 18:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.841 18:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:31.841 18:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.841 18:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:31.841 18:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:33.215 18:02:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:33.215 18:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.215 18:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.215 18:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.215 18:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.215 18:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.215 18:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.215 18:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.215 18:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.215 18:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.215 18:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.215 18:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.215 "name": "raid_bdev1", 00:17:33.215 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:33.215 "strip_size_kb": 64, 00:17:33.215 "state": "online", 00:17:33.215 "raid_level": "raid5f", 00:17:33.215 "superblock": true, 00:17:33.215 "num_base_bdevs": 4, 00:17:33.215 "num_base_bdevs_discovered": 4, 00:17:33.215 "num_base_bdevs_operational": 4, 00:17:33.215 "process": { 00:17:33.215 "type": "rebuild", 00:17:33.215 "target": "spare", 00:17:33.215 "progress": { 00:17:33.215 "blocks": 88320, 00:17:33.215 "percent": 46 00:17:33.215 } 00:17:33.215 }, 00:17:33.215 "base_bdevs_list": [ 00:17:33.215 { 00:17:33.215 "name": "spare", 00:17:33.215 "uuid": "3506d7dd-7f66-5933-8be5-4524a2039923", 
00:17:33.215 "is_configured": true, 00:17:33.215 "data_offset": 2048, 00:17:33.215 "data_size": 63488 00:17:33.215 }, 00:17:33.215 { 00:17:33.215 "name": "BaseBdev2", 00:17:33.215 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:33.215 "is_configured": true, 00:17:33.215 "data_offset": 2048, 00:17:33.215 "data_size": 63488 00:17:33.215 }, 00:17:33.215 { 00:17:33.215 "name": "BaseBdev3", 00:17:33.215 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:33.215 "is_configured": true, 00:17:33.215 "data_offset": 2048, 00:17:33.215 "data_size": 63488 00:17:33.215 }, 00:17:33.215 { 00:17:33.215 "name": "BaseBdev4", 00:17:33.215 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:33.215 "is_configured": true, 00:17:33.215 "data_offset": 2048, 00:17:33.215 "data_size": 63488 00:17:33.215 } 00:17:33.215 ] 00:17:33.215 }' 00:17:33.215 18:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.215 18:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:33.215 18:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.215 18:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.215 18:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:34.148 18:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:34.148 18:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.148 18:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.148 18:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.148 18:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.148 18:02:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.148 18:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.148 18:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.148 18:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.148 18:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.148 18:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.148 18:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.148 "name": "raid_bdev1", 00:17:34.148 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:34.148 "strip_size_kb": 64, 00:17:34.148 "state": "online", 00:17:34.148 "raid_level": "raid5f", 00:17:34.148 "superblock": true, 00:17:34.148 "num_base_bdevs": 4, 00:17:34.148 "num_base_bdevs_discovered": 4, 00:17:34.148 "num_base_bdevs_operational": 4, 00:17:34.148 "process": { 00:17:34.148 "type": "rebuild", 00:17:34.148 "target": "spare", 00:17:34.148 "progress": { 00:17:34.148 "blocks": 109440, 00:17:34.148 "percent": 57 00:17:34.148 } 00:17:34.148 }, 00:17:34.149 "base_bdevs_list": [ 00:17:34.149 { 00:17:34.149 "name": "spare", 00:17:34.149 "uuid": "3506d7dd-7f66-5933-8be5-4524a2039923", 00:17:34.149 "is_configured": true, 00:17:34.149 "data_offset": 2048, 00:17:34.149 "data_size": 63488 00:17:34.149 }, 00:17:34.149 { 00:17:34.149 "name": "BaseBdev2", 00:17:34.149 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:34.149 "is_configured": true, 00:17:34.149 "data_offset": 2048, 00:17:34.149 "data_size": 63488 00:17:34.149 }, 00:17:34.149 { 00:17:34.149 "name": "BaseBdev3", 00:17:34.149 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:34.149 "is_configured": true, 00:17:34.149 "data_offset": 2048, 00:17:34.149 
"data_size": 63488 00:17:34.149 }, 00:17:34.149 { 00:17:34.149 "name": "BaseBdev4", 00:17:34.149 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:34.149 "is_configured": true, 00:17:34.149 "data_offset": 2048, 00:17:34.149 "data_size": 63488 00:17:34.149 } 00:17:34.149 ] 00:17:34.149 }' 00:17:34.149 18:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.149 18:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.149 18:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.149 18:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.149 18:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:35.538 18:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:35.538 18:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.538 18:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.538 18:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.538 18:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.538 18:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.538 18:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.538 18:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.538 18:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.538 18:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.538 
18:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.538 18:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.538 "name": "raid_bdev1", 00:17:35.538 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:35.538 "strip_size_kb": 64, 00:17:35.538 "state": "online", 00:17:35.538 "raid_level": "raid5f", 00:17:35.538 "superblock": true, 00:17:35.538 "num_base_bdevs": 4, 00:17:35.538 "num_base_bdevs_discovered": 4, 00:17:35.538 "num_base_bdevs_operational": 4, 00:17:35.538 "process": { 00:17:35.538 "type": "rebuild", 00:17:35.538 "target": "spare", 00:17:35.538 "progress": { 00:17:35.538 "blocks": 130560, 00:17:35.538 "percent": 68 00:17:35.538 } 00:17:35.538 }, 00:17:35.538 "base_bdevs_list": [ 00:17:35.538 { 00:17:35.538 "name": "spare", 00:17:35.538 "uuid": "3506d7dd-7f66-5933-8be5-4524a2039923", 00:17:35.538 "is_configured": true, 00:17:35.538 "data_offset": 2048, 00:17:35.538 "data_size": 63488 00:17:35.538 }, 00:17:35.538 { 00:17:35.538 "name": "BaseBdev2", 00:17:35.538 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:35.538 "is_configured": true, 00:17:35.538 "data_offset": 2048, 00:17:35.538 "data_size": 63488 00:17:35.538 }, 00:17:35.538 { 00:17:35.538 "name": "BaseBdev3", 00:17:35.538 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:35.538 "is_configured": true, 00:17:35.538 "data_offset": 2048, 00:17:35.538 "data_size": 63488 00:17:35.538 }, 00:17:35.538 { 00:17:35.538 "name": "BaseBdev4", 00:17:35.538 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:35.538 "is_configured": true, 00:17:35.538 "data_offset": 2048, 00:17:35.538 "data_size": 63488 00:17:35.538 } 00:17:35.538 ] 00:17:35.538 }' 00:17:35.538 18:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.538 18:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.538 18:02:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.538 18:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.538 18:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:36.473 18:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:36.473 18:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.473 18:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.473 18:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.473 18:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.473 18:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.473 18:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.473 18:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.473 18:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.473 18:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.473 18:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.473 18:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.473 "name": "raid_bdev1", 00:17:36.473 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:36.473 "strip_size_kb": 64, 00:17:36.473 "state": "online", 00:17:36.473 "raid_level": "raid5f", 00:17:36.473 "superblock": true, 00:17:36.473 "num_base_bdevs": 4, 00:17:36.473 "num_base_bdevs_discovered": 4, 00:17:36.473 "num_base_bdevs_operational": 
4, 00:17:36.473 "process": { 00:17:36.473 "type": "rebuild", 00:17:36.473 "target": "spare", 00:17:36.473 "progress": { 00:17:36.473 "blocks": 153600, 00:17:36.473 "percent": 80 00:17:36.473 } 00:17:36.473 }, 00:17:36.473 "base_bdevs_list": [ 00:17:36.473 { 00:17:36.473 "name": "spare", 00:17:36.473 "uuid": "3506d7dd-7f66-5933-8be5-4524a2039923", 00:17:36.473 "is_configured": true, 00:17:36.473 "data_offset": 2048, 00:17:36.473 "data_size": 63488 00:17:36.473 }, 00:17:36.473 { 00:17:36.473 "name": "BaseBdev2", 00:17:36.473 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:36.473 "is_configured": true, 00:17:36.473 "data_offset": 2048, 00:17:36.473 "data_size": 63488 00:17:36.473 }, 00:17:36.473 { 00:17:36.473 "name": "BaseBdev3", 00:17:36.473 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:36.473 "is_configured": true, 00:17:36.473 "data_offset": 2048, 00:17:36.473 "data_size": 63488 00:17:36.473 }, 00:17:36.473 { 00:17:36.473 "name": "BaseBdev4", 00:17:36.473 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:36.473 "is_configured": true, 00:17:36.473 "data_offset": 2048, 00:17:36.473 "data_size": 63488 00:17:36.473 } 00:17:36.473 ] 00:17:36.473 }' 00:17:36.473 18:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.473 18:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.473 18:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.473 18:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.473 18:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:37.845 18:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:37.845 18:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.845 
18:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.845 18:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.845 18:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.845 18:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.845 18:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.845 18:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.845 18:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.845 18:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.845 18:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.845 18:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.845 "name": "raid_bdev1", 00:17:37.845 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:37.845 "strip_size_kb": 64, 00:17:37.845 "state": "online", 00:17:37.845 "raid_level": "raid5f", 00:17:37.845 "superblock": true, 00:17:37.845 "num_base_bdevs": 4, 00:17:37.845 "num_base_bdevs_discovered": 4, 00:17:37.845 "num_base_bdevs_operational": 4, 00:17:37.845 "process": { 00:17:37.845 "type": "rebuild", 00:17:37.845 "target": "spare", 00:17:37.845 "progress": { 00:17:37.845 "blocks": 174720, 00:17:37.845 "percent": 91 00:17:37.845 } 00:17:37.845 }, 00:17:37.845 "base_bdevs_list": [ 00:17:37.845 { 00:17:37.845 "name": "spare", 00:17:37.845 "uuid": "3506d7dd-7f66-5933-8be5-4524a2039923", 00:17:37.845 "is_configured": true, 00:17:37.845 "data_offset": 2048, 00:17:37.845 "data_size": 63488 00:17:37.845 }, 00:17:37.845 { 00:17:37.845 "name": "BaseBdev2", 00:17:37.845 "uuid": 
"2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:37.845 "is_configured": true, 00:17:37.845 "data_offset": 2048, 00:17:37.845 "data_size": 63488 00:17:37.845 }, 00:17:37.845 { 00:17:37.845 "name": "BaseBdev3", 00:17:37.845 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:37.845 "is_configured": true, 00:17:37.845 "data_offset": 2048, 00:17:37.845 "data_size": 63488 00:17:37.845 }, 00:17:37.845 { 00:17:37.845 "name": "BaseBdev4", 00:17:37.845 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:37.845 "is_configured": true, 00:17:37.845 "data_offset": 2048, 00:17:37.845 "data_size": 63488 00:17:37.845 } 00:17:37.845 ] 00:17:37.845 }' 00:17:37.845 18:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.845 18:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.845 18:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.845 18:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.845 18:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:38.411 [2024-11-26 18:02:20.151418] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:38.411 [2024-11-26 18:02:20.151537] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:38.411 [2024-11-26 18:02:20.151708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.669 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:38.669 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.669 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.669 18:02:20 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.669 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.669 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.669 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.669 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.669 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.669 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.669 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.669 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.669 "name": "raid_bdev1", 00:17:38.669 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:38.669 "strip_size_kb": 64, 00:17:38.669 "state": "online", 00:17:38.669 "raid_level": "raid5f", 00:17:38.669 "superblock": true, 00:17:38.669 "num_base_bdevs": 4, 00:17:38.669 "num_base_bdevs_discovered": 4, 00:17:38.669 "num_base_bdevs_operational": 4, 00:17:38.669 "base_bdevs_list": [ 00:17:38.669 { 00:17:38.669 "name": "spare", 00:17:38.669 "uuid": "3506d7dd-7f66-5933-8be5-4524a2039923", 00:17:38.669 "is_configured": true, 00:17:38.669 "data_offset": 2048, 00:17:38.669 "data_size": 63488 00:17:38.669 }, 00:17:38.669 { 00:17:38.669 "name": "BaseBdev2", 00:17:38.669 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:38.669 "is_configured": true, 00:17:38.669 "data_offset": 2048, 00:17:38.669 "data_size": 63488 00:17:38.669 }, 00:17:38.669 { 00:17:38.669 "name": "BaseBdev3", 00:17:38.669 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:38.669 "is_configured": true, 00:17:38.669 "data_offset": 2048, 00:17:38.669 "data_size": 63488 00:17:38.669 }, 
00:17:38.669 { 00:17:38.669 "name": "BaseBdev4", 00:17:38.669 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:38.669 "is_configured": true, 00:17:38.669 "data_offset": 2048, 00:17:38.669 "data_size": 63488 00:17:38.669 } 00:17:38.669 ] 00:17:38.669 }' 00:17:38.669 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.669 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:38.669 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.927 "name": "raid_bdev1", 00:17:38.927 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:38.927 "strip_size_kb": 64, 00:17:38.927 "state": "online", 00:17:38.927 "raid_level": "raid5f", 00:17:38.927 "superblock": true, 00:17:38.927 "num_base_bdevs": 4, 00:17:38.927 "num_base_bdevs_discovered": 4, 00:17:38.927 "num_base_bdevs_operational": 4, 00:17:38.927 "base_bdevs_list": [ 00:17:38.927 { 00:17:38.927 "name": "spare", 00:17:38.927 "uuid": "3506d7dd-7f66-5933-8be5-4524a2039923", 00:17:38.927 "is_configured": true, 00:17:38.927 "data_offset": 2048, 00:17:38.927 "data_size": 63488 00:17:38.927 }, 00:17:38.927 { 00:17:38.927 "name": "BaseBdev2", 00:17:38.927 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:38.927 "is_configured": true, 00:17:38.927 "data_offset": 2048, 00:17:38.927 "data_size": 63488 00:17:38.927 }, 00:17:38.927 { 00:17:38.927 "name": "BaseBdev3", 00:17:38.927 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:38.927 "is_configured": true, 00:17:38.927 "data_offset": 2048, 00:17:38.927 "data_size": 63488 00:17:38.927 }, 00:17:38.927 { 00:17:38.927 "name": "BaseBdev4", 00:17:38.927 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:38.927 "is_configured": true, 00:17:38.927 "data_offset": 2048, 00:17:38.927 "data_size": 63488 00:17:38.927 } 00:17:38.927 ] 00:17:38.927 }' 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:38.927 18:02:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.927 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.928 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.928 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.928 "name": "raid_bdev1", 00:17:38.928 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:38.928 "strip_size_kb": 64, 00:17:38.928 "state": "online", 00:17:38.928 "raid_level": "raid5f", 00:17:38.928 "superblock": true, 00:17:38.928 "num_base_bdevs": 4, 00:17:38.928 "num_base_bdevs_discovered": 4, 00:17:38.928 "num_base_bdevs_operational": 4, 00:17:38.928 
"base_bdevs_list": [ 00:17:38.928 { 00:17:38.928 "name": "spare", 00:17:38.928 "uuid": "3506d7dd-7f66-5933-8be5-4524a2039923", 00:17:38.928 "is_configured": true, 00:17:38.928 "data_offset": 2048, 00:17:38.928 "data_size": 63488 00:17:38.928 }, 00:17:38.928 { 00:17:38.928 "name": "BaseBdev2", 00:17:38.928 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:38.928 "is_configured": true, 00:17:38.928 "data_offset": 2048, 00:17:38.928 "data_size": 63488 00:17:38.928 }, 00:17:38.928 { 00:17:38.928 "name": "BaseBdev3", 00:17:38.928 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:38.928 "is_configured": true, 00:17:38.928 "data_offset": 2048, 00:17:38.928 "data_size": 63488 00:17:38.928 }, 00:17:38.928 { 00:17:38.928 "name": "BaseBdev4", 00:17:38.928 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:38.928 "is_configured": true, 00:17:38.928 "data_offset": 2048, 00:17:38.928 "data_size": 63488 00:17:38.928 } 00:17:38.928 ] 00:17:38.928 }' 00:17:38.928 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.928 18:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.493 [2024-11-26 18:02:21.126261] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:39.493 [2024-11-26 18:02:21.126363] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.493 [2024-11-26 18:02:21.126508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.493 [2024-11-26 18:02:21.126668] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:17:39.493 [2024-11-26 18:02:21.126753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:39.493 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:39.751 /dev/nbd0 00:17:39.751 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:39.751 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:39.751 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:39.751 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:39.751 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:39.751 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:39.751 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:39.751 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:39.751 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:39.751 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:39.751 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:39.751 1+0 records in 00:17:39.751 1+0 records out 00:17:39.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330477 s, 12.4 MB/s 00:17:39.751 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.752 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:39.752 18:02:21 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.752 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:39.752 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:39.752 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:39.752 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:39.752 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:40.009 /dev/nbd1 00:17:40.009 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:40.009 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:40.009 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:40.009 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:40.010 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:40.010 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:40.010 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:40.010 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:40.010 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:40.010 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:40.010 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:17:40.010 1+0 records in 00:17:40.010 1+0 records out 00:17:40.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359451 s, 11.4 MB/s 00:17:40.010 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:40.010 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:40.010 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:40.010 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:40.010 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:40.010 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:40.010 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:40.010 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:40.269 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:40.269 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:40.269 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:40.269 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:40.269 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:40.269 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:40.269 18:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:40.528 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:17:40.528 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:40.528 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:40.528 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:40.528 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:40.528 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:40.528 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:40.528 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:40.528 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:40.528 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.788 [2024-11-26 18:02:22.544154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:40.788 [2024-11-26 18:02:22.544234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.788 [2024-11-26 18:02:22.544267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:40.788 [2024-11-26 18:02:22.544280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.788 [2024-11-26 18:02:22.547373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.788 [2024-11-26 18:02:22.547427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:40.788 [2024-11-26 18:02:22.547566] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:40.788 [2024-11-26 18:02:22.547645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.788 [2024-11-26 18:02:22.547864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:40.788 [2024-11-26 18:02:22.547993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:40.788 [2024-11-26 18:02:22.548140] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:40.788 spare 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.788 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.788 [2024-11-26 18:02:22.648134] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:40.788 [2024-11-26 18:02:22.648208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:40.788 [2024-11-26 18:02:22.648626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:41.048 [2024-11-26 18:02:22.658821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:41.048 [2024-11-26 18:02:22.658853] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:41.048 [2024-11-26 18:02:22.659157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.048 "name": "raid_bdev1", 00:17:41.048 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:41.048 "strip_size_kb": 64, 00:17:41.048 "state": "online", 00:17:41.048 "raid_level": "raid5f", 00:17:41.048 "superblock": true, 00:17:41.048 "num_base_bdevs": 4, 00:17:41.048 "num_base_bdevs_discovered": 4, 00:17:41.048 "num_base_bdevs_operational": 4, 00:17:41.048 "base_bdevs_list": [ 00:17:41.048 { 00:17:41.048 "name": "spare", 00:17:41.048 "uuid": "3506d7dd-7f66-5933-8be5-4524a2039923", 00:17:41.048 "is_configured": true, 00:17:41.048 "data_offset": 2048, 00:17:41.048 "data_size": 63488 00:17:41.048 }, 00:17:41.048 { 00:17:41.048 "name": "BaseBdev2", 00:17:41.048 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:41.048 "is_configured": true, 00:17:41.048 "data_offset": 
2048, 00:17:41.048 "data_size": 63488 00:17:41.048 }, 00:17:41.048 { 00:17:41.048 "name": "BaseBdev3", 00:17:41.048 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:41.048 "is_configured": true, 00:17:41.048 "data_offset": 2048, 00:17:41.048 "data_size": 63488 00:17:41.048 }, 00:17:41.048 { 00:17:41.048 "name": "BaseBdev4", 00:17:41.048 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:41.048 "is_configured": true, 00:17:41.048 "data_offset": 2048, 00:17:41.048 "data_size": 63488 00:17:41.048 } 00:17:41.048 ] 00:17:41.048 }' 00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.048 18:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.308 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:41.308 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.308 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:41.308 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:41.308 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.308 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.308 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.308 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.308 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.308 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.573 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.573 "name": 
"raid_bdev1", 00:17:41.573 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:41.573 "strip_size_kb": 64, 00:17:41.573 "state": "online", 00:17:41.573 "raid_level": "raid5f", 00:17:41.573 "superblock": true, 00:17:41.573 "num_base_bdevs": 4, 00:17:41.573 "num_base_bdevs_discovered": 4, 00:17:41.573 "num_base_bdevs_operational": 4, 00:17:41.573 "base_bdevs_list": [ 00:17:41.573 { 00:17:41.573 "name": "spare", 00:17:41.573 "uuid": "3506d7dd-7f66-5933-8be5-4524a2039923", 00:17:41.573 "is_configured": true, 00:17:41.573 "data_offset": 2048, 00:17:41.573 "data_size": 63488 00:17:41.573 }, 00:17:41.573 { 00:17:41.573 "name": "BaseBdev2", 00:17:41.573 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:41.573 "is_configured": true, 00:17:41.573 "data_offset": 2048, 00:17:41.573 "data_size": 63488 00:17:41.573 }, 00:17:41.573 { 00:17:41.573 "name": "BaseBdev3", 00:17:41.573 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:41.573 "is_configured": true, 00:17:41.573 "data_offset": 2048, 00:17:41.573 "data_size": 63488 00:17:41.573 }, 00:17:41.573 { 00:17:41.573 "name": "BaseBdev4", 00:17:41.573 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:41.573 "is_configured": true, 00:17:41.573 "data_offset": 2048, 00:17:41.573 "data_size": 63488 00:17:41.573 } 00:17:41.573 ] 00:17:41.573 }' 00:17:41.573 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.573 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:41.573 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.573 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:41.573 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.573 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 
00:17:41.573 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.574 [2024-11-26 18:02:23.337476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.574 "name": "raid_bdev1", 00:17:41.574 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:41.574 "strip_size_kb": 64, 00:17:41.574 "state": "online", 00:17:41.574 "raid_level": "raid5f", 00:17:41.574 "superblock": true, 00:17:41.574 "num_base_bdevs": 4, 00:17:41.574 "num_base_bdevs_discovered": 3, 00:17:41.574 "num_base_bdevs_operational": 3, 00:17:41.574 "base_bdevs_list": [ 00:17:41.574 { 00:17:41.574 "name": null, 00:17:41.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.574 "is_configured": false, 00:17:41.574 "data_offset": 0, 00:17:41.574 "data_size": 63488 00:17:41.574 }, 00:17:41.574 { 00:17:41.574 "name": "BaseBdev2", 00:17:41.574 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:41.574 "is_configured": true, 00:17:41.574 "data_offset": 2048, 00:17:41.574 "data_size": 63488 00:17:41.574 }, 00:17:41.574 { 00:17:41.574 "name": "BaseBdev3", 00:17:41.574 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:41.574 "is_configured": true, 00:17:41.574 "data_offset": 2048, 00:17:41.574 "data_size": 63488 00:17:41.574 }, 00:17:41.574 { 00:17:41.574 "name": "BaseBdev4", 00:17:41.574 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:41.574 "is_configured": true, 00:17:41.574 "data_offset": 
2048, 00:17:41.574 "data_size": 63488 00:17:41.574 } 00:17:41.574 ] 00:17:41.574 }' 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.574 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.144 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:42.144 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.144 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.144 [2024-11-26 18:02:23.728891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.144 [2024-11-26 18:02:23.729206] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:42.144 [2024-11-26 18:02:23.729240] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:42.144 [2024-11-26 18:02:23.729288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.144 [2024-11-26 18:02:23.746943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:42.144 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.144 18:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:42.144 [2024-11-26 18:02:23.757817] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:43.096 18:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.096 18:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.096 18:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.096 18:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.096 18:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.096 18:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.096 18:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.096 18:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.096 18:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.096 18:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.096 18:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.096 "name": "raid_bdev1", 00:17:43.096 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:43.096 "strip_size_kb": 64, 00:17:43.096 "state": "online", 00:17:43.096 
"raid_level": "raid5f", 00:17:43.096 "superblock": true, 00:17:43.096 "num_base_bdevs": 4, 00:17:43.096 "num_base_bdevs_discovered": 4, 00:17:43.096 "num_base_bdevs_operational": 4, 00:17:43.096 "process": { 00:17:43.096 "type": "rebuild", 00:17:43.096 "target": "spare", 00:17:43.096 "progress": { 00:17:43.096 "blocks": 17280, 00:17:43.096 "percent": 9 00:17:43.096 } 00:17:43.096 }, 00:17:43.096 "base_bdevs_list": [ 00:17:43.096 { 00:17:43.096 "name": "spare", 00:17:43.096 "uuid": "3506d7dd-7f66-5933-8be5-4524a2039923", 00:17:43.096 "is_configured": true, 00:17:43.096 "data_offset": 2048, 00:17:43.096 "data_size": 63488 00:17:43.096 }, 00:17:43.096 { 00:17:43.096 "name": "BaseBdev2", 00:17:43.096 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:43.096 "is_configured": true, 00:17:43.096 "data_offset": 2048, 00:17:43.096 "data_size": 63488 00:17:43.096 }, 00:17:43.096 { 00:17:43.096 "name": "BaseBdev3", 00:17:43.096 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:43.096 "is_configured": true, 00:17:43.096 "data_offset": 2048, 00:17:43.096 "data_size": 63488 00:17:43.096 }, 00:17:43.096 { 00:17:43.096 "name": "BaseBdev4", 00:17:43.096 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:43.096 "is_configured": true, 00:17:43.096 "data_offset": 2048, 00:17:43.096 "data_size": 63488 00:17:43.096 } 00:17:43.096 ] 00:17:43.096 }' 00:17:43.096 18:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.096 18:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.096 18:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.096 18:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.096 18:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:43.096 18:02:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.096 18:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.096 [2024-11-26 18:02:24.913865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:43.356 [2024-11-26 18:02:24.967971] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:43.356 [2024-11-26 18:02:24.968168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.356 [2024-11-26 18:02:24.968220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:43.356 [2024-11-26 18:02:24.968262] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:43.356 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.356 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:43.356 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.356 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.356 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:43.356 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.356 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:43.356 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.356 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.356 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.356 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:43.356 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.356 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.356 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.356 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.356 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.356 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.356 "name": "raid_bdev1", 00:17:43.356 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:43.356 "strip_size_kb": 64, 00:17:43.356 "state": "online", 00:17:43.357 "raid_level": "raid5f", 00:17:43.357 "superblock": true, 00:17:43.357 "num_base_bdevs": 4, 00:17:43.357 "num_base_bdevs_discovered": 3, 00:17:43.357 "num_base_bdevs_operational": 3, 00:17:43.357 "base_bdevs_list": [ 00:17:43.357 { 00:17:43.357 "name": null, 00:17:43.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.357 "is_configured": false, 00:17:43.357 "data_offset": 0, 00:17:43.357 "data_size": 63488 00:17:43.357 }, 00:17:43.357 { 00:17:43.357 "name": "BaseBdev2", 00:17:43.357 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:43.357 "is_configured": true, 00:17:43.357 "data_offset": 2048, 00:17:43.357 "data_size": 63488 00:17:43.357 }, 00:17:43.357 { 00:17:43.357 "name": "BaseBdev3", 00:17:43.357 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:43.357 "is_configured": true, 00:17:43.357 "data_offset": 2048, 00:17:43.357 "data_size": 63488 00:17:43.357 }, 00:17:43.357 { 00:17:43.357 "name": "BaseBdev4", 00:17:43.357 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:43.357 "is_configured": true, 00:17:43.357 "data_offset": 2048, 00:17:43.357 "data_size": 63488 00:17:43.357 } 00:17:43.357 ] 00:17:43.357 }' 
00:17:43.357 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.357 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.617 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:43.617 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.617 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.617 [2024-11-26 18:02:25.466180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:43.617 [2024-11-26 18:02:25.466332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.617 [2024-11-26 18:02:25.466391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:43.617 [2024-11-26 18:02:25.466474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.617 [2024-11-26 18:02:25.467147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.617 [2024-11-26 18:02:25.467230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:43.617 [2024-11-26 18:02:25.467387] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:43.617 [2024-11-26 18:02:25.467444] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:43.617 [2024-11-26 18:02:25.467502] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:43.617 [2024-11-26 18:02:25.467560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:43.878 [2024-11-26 18:02:25.486037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:43.878 spare 00:17:43.878 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.878 18:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:43.878 [2024-11-26 18:02:25.498613] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:44.818 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.818 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.818 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.818 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.818 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.818 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.818 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.818 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.818 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.818 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.818 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.818 "name": "raid_bdev1", 00:17:44.818 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:44.818 "strip_size_kb": 64, 00:17:44.818 "state": 
"online", 00:17:44.818 "raid_level": "raid5f", 00:17:44.818 "superblock": true, 00:17:44.818 "num_base_bdevs": 4, 00:17:44.818 "num_base_bdevs_discovered": 4, 00:17:44.818 "num_base_bdevs_operational": 4, 00:17:44.818 "process": { 00:17:44.818 "type": "rebuild", 00:17:44.818 "target": "spare", 00:17:44.818 "progress": { 00:17:44.818 "blocks": 17280, 00:17:44.818 "percent": 9 00:17:44.818 } 00:17:44.818 }, 00:17:44.818 "base_bdevs_list": [ 00:17:44.818 { 00:17:44.818 "name": "spare", 00:17:44.818 "uuid": "3506d7dd-7f66-5933-8be5-4524a2039923", 00:17:44.818 "is_configured": true, 00:17:44.818 "data_offset": 2048, 00:17:44.818 "data_size": 63488 00:17:44.818 }, 00:17:44.818 { 00:17:44.818 "name": "BaseBdev2", 00:17:44.818 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:44.818 "is_configured": true, 00:17:44.818 "data_offset": 2048, 00:17:44.818 "data_size": 63488 00:17:44.818 }, 00:17:44.818 { 00:17:44.818 "name": "BaseBdev3", 00:17:44.818 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:44.818 "is_configured": true, 00:17:44.818 "data_offset": 2048, 00:17:44.818 "data_size": 63488 00:17:44.818 }, 00:17:44.818 { 00:17:44.818 "name": "BaseBdev4", 00:17:44.818 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:44.818 "is_configured": true, 00:17:44.818 "data_offset": 2048, 00:17:44.818 "data_size": 63488 00:17:44.818 } 00:17:44.818 ] 00:17:44.818 }' 00:17:44.818 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.818 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.818 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.818 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.818 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:44.818 18:02:26 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.818 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.818 [2024-11-26 18:02:26.646341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.077 [2024-11-26 18:02:26.708564] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:45.077 [2024-11-26 18:02:26.708649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.077 [2024-11-26 18:02:26.708674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.077 [2024-11-26 18:02:26.708683] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:45.077 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.077 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:45.077 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.077 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.077 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.077 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.077 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:45.077 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.077 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.077 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.077 18:02:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.077 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.077 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.077 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.077 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.077 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.077 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.077 "name": "raid_bdev1", 00:17:45.077 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:45.077 "strip_size_kb": 64, 00:17:45.077 "state": "online", 00:17:45.077 "raid_level": "raid5f", 00:17:45.077 "superblock": true, 00:17:45.077 "num_base_bdevs": 4, 00:17:45.077 "num_base_bdevs_discovered": 3, 00:17:45.077 "num_base_bdevs_operational": 3, 00:17:45.077 "base_bdevs_list": [ 00:17:45.077 { 00:17:45.077 "name": null, 00:17:45.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.077 "is_configured": false, 00:17:45.077 "data_offset": 0, 00:17:45.077 "data_size": 63488 00:17:45.077 }, 00:17:45.077 { 00:17:45.077 "name": "BaseBdev2", 00:17:45.077 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:45.077 "is_configured": true, 00:17:45.077 "data_offset": 2048, 00:17:45.077 "data_size": 63488 00:17:45.077 }, 00:17:45.077 { 00:17:45.077 "name": "BaseBdev3", 00:17:45.077 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:45.077 "is_configured": true, 00:17:45.077 "data_offset": 2048, 00:17:45.077 "data_size": 63488 00:17:45.077 }, 00:17:45.077 { 00:17:45.077 "name": "BaseBdev4", 00:17:45.077 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:45.077 "is_configured": true, 00:17:45.077 "data_offset": 2048, 00:17:45.077 
"data_size": 63488 00:17:45.077 } 00:17:45.077 ] 00:17:45.077 }' 00:17:45.077 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.077 18:02:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.337 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:45.337 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.337 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:45.337 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:45.337 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.337 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.337 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.337 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.337 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.596 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.596 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.596 "name": "raid_bdev1", 00:17:45.596 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:45.596 "strip_size_kb": 64, 00:17:45.596 "state": "online", 00:17:45.596 "raid_level": "raid5f", 00:17:45.596 "superblock": true, 00:17:45.596 "num_base_bdevs": 4, 00:17:45.596 "num_base_bdevs_discovered": 3, 00:17:45.596 "num_base_bdevs_operational": 3, 00:17:45.596 "base_bdevs_list": [ 00:17:45.596 { 00:17:45.596 "name": null, 00:17:45.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.596 
"is_configured": false, 00:17:45.596 "data_offset": 0, 00:17:45.596 "data_size": 63488 00:17:45.596 }, 00:17:45.596 { 00:17:45.596 "name": "BaseBdev2", 00:17:45.596 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:45.596 "is_configured": true, 00:17:45.596 "data_offset": 2048, 00:17:45.596 "data_size": 63488 00:17:45.596 }, 00:17:45.596 { 00:17:45.596 "name": "BaseBdev3", 00:17:45.596 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:45.596 "is_configured": true, 00:17:45.596 "data_offset": 2048, 00:17:45.596 "data_size": 63488 00:17:45.596 }, 00:17:45.596 { 00:17:45.596 "name": "BaseBdev4", 00:17:45.596 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:45.596 "is_configured": true, 00:17:45.596 "data_offset": 2048, 00:17:45.596 "data_size": 63488 00:17:45.596 } 00:17:45.596 ] 00:17:45.596 }' 00:17:45.596 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.596 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:45.596 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.596 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:45.596 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:45.596 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.596 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.596 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.596 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:45.596 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.596 18:02:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.596 [2024-11-26 18:02:27.321440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:45.596 [2024-11-26 18:02:27.321627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.596 [2024-11-26 18:02:27.321668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:45.596 [2024-11-26 18:02:27.321680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.596 [2024-11-26 18:02:27.322326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.596 [2024-11-26 18:02:27.322367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:45.596 [2024-11-26 18:02:27.322476] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:45.596 [2024-11-26 18:02:27.322494] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:45.596 [2024-11-26 18:02:27.322510] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:45.596 [2024-11-26 18:02:27.322523] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:45.596 BaseBdev1 00:17:45.596 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.596 18:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:46.536 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:46.536 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.536 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:46.536 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.536 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.536 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:46.536 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.536 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.536 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.536 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.536 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.536 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.536 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.536 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.536 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.536 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.536 "name": "raid_bdev1", 00:17:46.536 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:46.536 "strip_size_kb": 64, 00:17:46.536 "state": "online", 00:17:46.536 "raid_level": "raid5f", 00:17:46.536 "superblock": true, 00:17:46.536 "num_base_bdevs": 4, 00:17:46.536 "num_base_bdevs_discovered": 3, 00:17:46.536 "num_base_bdevs_operational": 3, 00:17:46.536 "base_bdevs_list": [ 00:17:46.536 { 00:17:46.536 "name": null, 00:17:46.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.536 "is_configured": false, 00:17:46.536 
"data_offset": 0, 00:17:46.536 "data_size": 63488 00:17:46.536 }, 00:17:46.536 { 00:17:46.536 "name": "BaseBdev2", 00:17:46.536 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:46.536 "is_configured": true, 00:17:46.536 "data_offset": 2048, 00:17:46.536 "data_size": 63488 00:17:46.536 }, 00:17:46.536 { 00:17:46.536 "name": "BaseBdev3", 00:17:46.536 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:46.536 "is_configured": true, 00:17:46.536 "data_offset": 2048, 00:17:46.536 "data_size": 63488 00:17:46.536 }, 00:17:46.536 { 00:17:46.536 "name": "BaseBdev4", 00:17:46.536 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:46.536 "is_configured": true, 00:17:46.536 "data_offset": 2048, 00:17:46.536 "data_size": 63488 00:17:46.536 } 00:17:46.536 ] 00:17:46.536 }' 00:17:46.536 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.536 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.104 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:47.104 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.104 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:47.104 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:47.104 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.104 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.104 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.104 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.104 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:47.104 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.105 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.105 "name": "raid_bdev1", 00:17:47.105 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:47.105 "strip_size_kb": 64, 00:17:47.105 "state": "online", 00:17:47.105 "raid_level": "raid5f", 00:17:47.105 "superblock": true, 00:17:47.105 "num_base_bdevs": 4, 00:17:47.105 "num_base_bdevs_discovered": 3, 00:17:47.105 "num_base_bdevs_operational": 3, 00:17:47.105 "base_bdevs_list": [ 00:17:47.105 { 00:17:47.105 "name": null, 00:17:47.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.105 "is_configured": false, 00:17:47.105 "data_offset": 0, 00:17:47.105 "data_size": 63488 00:17:47.105 }, 00:17:47.105 { 00:17:47.105 "name": "BaseBdev2", 00:17:47.105 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:47.105 "is_configured": true, 00:17:47.105 "data_offset": 2048, 00:17:47.105 "data_size": 63488 00:17:47.105 }, 00:17:47.105 { 00:17:47.105 "name": "BaseBdev3", 00:17:47.105 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:47.105 "is_configured": true, 00:17:47.105 "data_offset": 2048, 00:17:47.105 "data_size": 63488 00:17:47.105 }, 00:17:47.105 { 00:17:47.105 "name": "BaseBdev4", 00:17:47.105 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:47.105 "is_configured": true, 00:17:47.105 "data_offset": 2048, 00:17:47.105 "data_size": 63488 00:17:47.105 } 00:17:47.105 ] 00:17:47.105 }' 00:17:47.105 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.105 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:47.105 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.105 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:47.105 
18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:47.105 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:47.105 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:47.105 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:47.105 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.105 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:47.364 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.364 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:47.364 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.364 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.364 [2024-11-26 18:02:28.970870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.364 [2024-11-26 18:02:28.971093] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:47.364 [2024-11-26 18:02:28.971181] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:47.364 request: 00:17:47.364 { 00:17:47.364 "base_bdev": "BaseBdev1", 00:17:47.364 "raid_bdev": "raid_bdev1", 00:17:47.364 "method": "bdev_raid_add_base_bdev", 00:17:47.364 "req_id": 1 00:17:47.364 } 00:17:47.364 Got JSON-RPC error response 00:17:47.364 response: 00:17:47.364 { 00:17:47.364 "code": -22, 00:17:47.364 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:17:47.364 } 00:17:47.364 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:47.364 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:47.364 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:47.364 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:47.364 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:47.364 18:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:48.300 18:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:48.300 18:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.300 18:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.300 18:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.300 18:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.300 18:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:48.300 18:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.300 18:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.300 18:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.300 18:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.300 18:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.300 18:02:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.300 18:02:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.300 18:02:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.300 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.300 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.300 "name": "raid_bdev1", 00:17:48.300 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:48.300 "strip_size_kb": 64, 00:17:48.300 "state": "online", 00:17:48.300 "raid_level": "raid5f", 00:17:48.300 "superblock": true, 00:17:48.300 "num_base_bdevs": 4, 00:17:48.300 "num_base_bdevs_discovered": 3, 00:17:48.300 "num_base_bdevs_operational": 3, 00:17:48.300 "base_bdevs_list": [ 00:17:48.300 { 00:17:48.301 "name": null, 00:17:48.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.301 "is_configured": false, 00:17:48.301 "data_offset": 0, 00:17:48.301 "data_size": 63488 00:17:48.301 }, 00:17:48.301 { 00:17:48.301 "name": "BaseBdev2", 00:17:48.301 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:48.301 "is_configured": true, 00:17:48.301 "data_offset": 2048, 00:17:48.301 "data_size": 63488 00:17:48.301 }, 00:17:48.301 { 00:17:48.301 "name": "BaseBdev3", 00:17:48.301 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:48.301 "is_configured": true, 00:17:48.301 "data_offset": 2048, 00:17:48.301 "data_size": 63488 00:17:48.301 }, 00:17:48.301 { 00:17:48.301 "name": "BaseBdev4", 00:17:48.301 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:48.301 "is_configured": true, 00:17:48.301 "data_offset": 2048, 00:17:48.301 "data_size": 63488 00:17:48.301 } 00:17:48.301 ] 00:17:48.301 }' 00:17:48.301 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.301 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:48.560 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:48.560 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.560 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:48.560 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:48.560 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.818 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.818 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.818 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.818 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.818 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.818 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.818 "name": "raid_bdev1", 00:17:48.818 "uuid": "ab7780b2-a6f9-4a7a-97fa-019e4f201f1a", 00:17:48.818 "strip_size_kb": 64, 00:17:48.818 "state": "online", 00:17:48.818 "raid_level": "raid5f", 00:17:48.818 "superblock": true, 00:17:48.818 "num_base_bdevs": 4, 00:17:48.818 "num_base_bdevs_discovered": 3, 00:17:48.818 "num_base_bdevs_operational": 3, 00:17:48.818 "base_bdevs_list": [ 00:17:48.818 { 00:17:48.818 "name": null, 00:17:48.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.818 "is_configured": false, 00:17:48.818 "data_offset": 0, 00:17:48.818 "data_size": 63488 00:17:48.818 }, 00:17:48.818 { 00:17:48.818 "name": "BaseBdev2", 00:17:48.818 "uuid": "2d381e0f-59f4-5b8e-a58d-98ab5f59c65e", 00:17:48.818 "is_configured": true, 
00:17:48.818 "data_offset": 2048, 00:17:48.818 "data_size": 63488 00:17:48.818 }, 00:17:48.818 { 00:17:48.818 "name": "BaseBdev3", 00:17:48.818 "uuid": "d3938309-14f9-53fc-ab88-c360d19162c5", 00:17:48.818 "is_configured": true, 00:17:48.818 "data_offset": 2048, 00:17:48.818 "data_size": 63488 00:17:48.818 }, 00:17:48.818 { 00:17:48.818 "name": "BaseBdev4", 00:17:48.818 "uuid": "662ecdfb-a64a-5a94-b81e-150504c25d87", 00:17:48.819 "is_configured": true, 00:17:48.819 "data_offset": 2048, 00:17:48.819 "data_size": 63488 00:17:48.819 } 00:17:48.819 ] 00:17:48.819 }' 00:17:48.819 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.819 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:48.819 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.819 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:48.819 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85545 00:17:48.819 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85545 ']' 00:17:48.819 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85545 00:17:48.819 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:48.819 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:48.819 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85545 00:17:48.819 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:48.819 killing process with pid 85545 00:17:48.819 Received shutdown signal, test time was about 60.000000 seconds 00:17:48.819 00:17:48.819 Latency(us) 00:17:48.819 [2024-11-26T18:02:30.682Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.819 [2024-11-26T18:02:30.682Z] =================================================================================================================== 00:17:48.819 [2024-11-26T18:02:30.682Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:48.819 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:48.819 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85545' 00:17:48.819 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85545 00:17:48.819 [2024-11-26 18:02:30.602647] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:48.819 18:02:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85545 00:17:48.819 [2024-11-26 18:02:30.602802] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.819 [2024-11-26 18:02:30.602904] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.819 [2024-11-26 18:02:30.602946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:49.388 [2024-11-26 18:02:31.138875] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:50.769 ************************************ 00:17:50.769 END TEST raid5f_rebuild_test_sb 00:17:50.769 ************************************ 00:17:50.769 18:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:50.769 00:17:50.769 real 0m27.561s 00:17:50.769 user 0m34.783s 00:17:50.769 sys 0m2.965s 00:17:50.769 18:02:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:50.769 18:02:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.769 18:02:32 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:50.769 18:02:32 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:50.769 18:02:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:50.769 18:02:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:50.769 18:02:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:50.769 ************************************ 00:17:50.769 START TEST raid_state_function_test_sb_4k 00:17:50.769 ************************************ 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:50.769 18:02:32 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86360 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86360' 00:17:50.769 Process raid pid: 86360 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86360 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86360 ']' 00:17:50.769 18:02:32 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.769 18:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.769 [2024-11-26 18:02:32.474374] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:17:50.769 [2024-11-26 18:02:32.475034] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.029 [2024-11-26 18:02:32.651836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.029 [2024-11-26 18:02:32.774693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.288 [2024-11-26 18:02:32.982357] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.288 [2024-11-26 18:02:32.982405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.548 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.549 [2024-11-26 18:02:33.324764] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:51.549 [2024-11-26 18:02:33.324822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:51.549 [2024-11-26 18:02:33.324833] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:51.549 [2024-11-26 18:02:33.324843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.549 
18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.549 "name": "Existed_Raid", 00:17:51.549 "uuid": "3c80583f-9aca-4c4b-82e7-a3d6013924db", 00:17:51.549 "strip_size_kb": 0, 00:17:51.549 "state": "configuring", 00:17:51.549 "raid_level": "raid1", 00:17:51.549 "superblock": true, 00:17:51.549 "num_base_bdevs": 2, 00:17:51.549 "num_base_bdevs_discovered": 0, 00:17:51.549 "num_base_bdevs_operational": 2, 00:17:51.549 "base_bdevs_list": [ 00:17:51.549 { 00:17:51.549 "name": "BaseBdev1", 00:17:51.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.549 "is_configured": false, 00:17:51.549 "data_offset": 0, 00:17:51.549 "data_size": 0 00:17:51.549 }, 00:17:51.549 { 00:17:51.549 "name": "BaseBdev2", 00:17:51.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.549 "is_configured": false, 00:17:51.549 "data_offset": 0, 00:17:51.549 "data_size": 0 00:17:51.549 } 00:17:51.549 ] 00:17:51.549 }' 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.549 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.119 [2024-11-26 18:02:33.767942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:52.119 [2024-11-26 18:02:33.768071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.119 [2024-11-26 18:02:33.779914] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:52.119 [2024-11-26 18:02:33.780000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:52.119 [2024-11-26 18:02:33.780041] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:52.119 [2024-11-26 18:02:33.780084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.119 18:02:33 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.119 [2024-11-26 18:02:33.830486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:52.119 BaseBdev1 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.119 [ 00:17:52.119 { 00:17:52.119 "name": "BaseBdev1", 00:17:52.119 "aliases": [ 00:17:52.119 
"6a4d1176-e693-4122-a418-08c663351b73" 00:17:52.119 ], 00:17:52.119 "product_name": "Malloc disk", 00:17:52.119 "block_size": 4096, 00:17:52.119 "num_blocks": 8192, 00:17:52.119 "uuid": "6a4d1176-e693-4122-a418-08c663351b73", 00:17:52.119 "assigned_rate_limits": { 00:17:52.119 "rw_ios_per_sec": 0, 00:17:52.119 "rw_mbytes_per_sec": 0, 00:17:52.119 "r_mbytes_per_sec": 0, 00:17:52.119 "w_mbytes_per_sec": 0 00:17:52.119 }, 00:17:52.119 "claimed": true, 00:17:52.119 "claim_type": "exclusive_write", 00:17:52.119 "zoned": false, 00:17:52.119 "supported_io_types": { 00:17:52.119 "read": true, 00:17:52.119 "write": true, 00:17:52.119 "unmap": true, 00:17:52.119 "flush": true, 00:17:52.119 "reset": true, 00:17:52.119 "nvme_admin": false, 00:17:52.119 "nvme_io": false, 00:17:52.119 "nvme_io_md": false, 00:17:52.119 "write_zeroes": true, 00:17:52.119 "zcopy": true, 00:17:52.119 "get_zone_info": false, 00:17:52.119 "zone_management": false, 00:17:52.119 "zone_append": false, 00:17:52.119 "compare": false, 00:17:52.119 "compare_and_write": false, 00:17:52.119 "abort": true, 00:17:52.119 "seek_hole": false, 00:17:52.119 "seek_data": false, 00:17:52.119 "copy": true, 00:17:52.119 "nvme_iov_md": false 00:17:52.119 }, 00:17:52.119 "memory_domains": [ 00:17:52.119 { 00:17:52.119 "dma_device_id": "system", 00:17:52.119 "dma_device_type": 1 00:17:52.119 }, 00:17:52.119 { 00:17:52.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.119 "dma_device_type": 2 00:17:52.119 } 00:17:52.119 ], 00:17:52.119 "driver_specific": {} 00:17:52.119 } 00:17:52.119 ] 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.119 "name": "Existed_Raid", 00:17:52.119 "uuid": "18a6fc4a-4169-433b-8b02-9cb8ea3a117c", 00:17:52.119 "strip_size_kb": 0, 00:17:52.119 "state": "configuring", 00:17:52.119 "raid_level": "raid1", 00:17:52.119 "superblock": true, 00:17:52.119 "num_base_bdevs": 2, 00:17:52.119 
"num_base_bdevs_discovered": 1, 00:17:52.119 "num_base_bdevs_operational": 2, 00:17:52.119 "base_bdevs_list": [ 00:17:52.119 { 00:17:52.119 "name": "BaseBdev1", 00:17:52.119 "uuid": "6a4d1176-e693-4122-a418-08c663351b73", 00:17:52.119 "is_configured": true, 00:17:52.119 "data_offset": 256, 00:17:52.119 "data_size": 7936 00:17:52.119 }, 00:17:52.119 { 00:17:52.119 "name": "BaseBdev2", 00:17:52.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.119 "is_configured": false, 00:17:52.119 "data_offset": 0, 00:17:52.119 "data_size": 0 00:17:52.119 } 00:17:52.119 ] 00:17:52.119 }' 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.119 18:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.689 [2024-11-26 18:02:34.317728] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:52.689 [2024-11-26 18:02:34.317859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.689 [2024-11-26 18:02:34.329764] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:52.689 [2024-11-26 18:02:34.331858] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:52.689 [2024-11-26 18:02:34.331940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.689 "name": "Existed_Raid", 00:17:52.689 "uuid": "4aab4f51-ca4b-46b4-9fd3-b2b7562c8ad6", 00:17:52.689 "strip_size_kb": 0, 00:17:52.689 "state": "configuring", 00:17:52.689 "raid_level": "raid1", 00:17:52.689 "superblock": true, 00:17:52.689 "num_base_bdevs": 2, 00:17:52.689 "num_base_bdevs_discovered": 1, 00:17:52.689 "num_base_bdevs_operational": 2, 00:17:52.689 "base_bdevs_list": [ 00:17:52.689 { 00:17:52.689 "name": "BaseBdev1", 00:17:52.689 "uuid": "6a4d1176-e693-4122-a418-08c663351b73", 00:17:52.689 "is_configured": true, 00:17:52.689 "data_offset": 256, 00:17:52.689 "data_size": 7936 00:17:52.689 }, 00:17:52.689 { 00:17:52.689 "name": "BaseBdev2", 00:17:52.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.689 "is_configured": false, 00:17:52.689 "data_offset": 0, 00:17:52.689 "data_size": 0 00:17:52.689 } 00:17:52.689 ] 00:17:52.689 }' 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.689 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.949 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:52.949 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.949 18:02:34 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.949 [2024-11-26 18:02:34.802485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:52.949 [2024-11-26 18:02:34.802776] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:52.949 [2024-11-26 18:02:34.802792] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:52.949 [2024-11-26 18:02:34.803092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:52.949 BaseBdev2 00:17:52.949 [2024-11-26 18:02:34.803359] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:52.949 [2024-11-26 18:02:34.803384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:52.949 [2024-11-26 18:02:34.803548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.949 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.949 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:52.949 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:52.949 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:52.949 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:52.949 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:52.949 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:52.949 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:52.949 18:02:34 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.949 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.208 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.208 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:53.208 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.208 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.208 [ 00:17:53.208 { 00:17:53.208 "name": "BaseBdev2", 00:17:53.208 "aliases": [ 00:17:53.208 "ed763106-4693-4fc4-9abd-b0b32da0f146" 00:17:53.208 ], 00:17:53.208 "product_name": "Malloc disk", 00:17:53.208 "block_size": 4096, 00:17:53.208 "num_blocks": 8192, 00:17:53.208 "uuid": "ed763106-4693-4fc4-9abd-b0b32da0f146", 00:17:53.208 "assigned_rate_limits": { 00:17:53.208 "rw_ios_per_sec": 0, 00:17:53.208 "rw_mbytes_per_sec": 0, 00:17:53.208 "r_mbytes_per_sec": 0, 00:17:53.208 "w_mbytes_per_sec": 0 00:17:53.208 }, 00:17:53.208 "claimed": true, 00:17:53.208 "claim_type": "exclusive_write", 00:17:53.208 "zoned": false, 00:17:53.208 "supported_io_types": { 00:17:53.208 "read": true, 00:17:53.208 "write": true, 00:17:53.208 "unmap": true, 00:17:53.208 "flush": true, 00:17:53.208 "reset": true, 00:17:53.208 "nvme_admin": false, 00:17:53.208 "nvme_io": false, 00:17:53.208 "nvme_io_md": false, 00:17:53.208 "write_zeroes": true, 00:17:53.208 "zcopy": true, 00:17:53.208 "get_zone_info": false, 00:17:53.208 "zone_management": false, 00:17:53.208 "zone_append": false, 00:17:53.208 "compare": false, 00:17:53.208 "compare_and_write": false, 00:17:53.208 "abort": true, 00:17:53.208 "seek_hole": false, 00:17:53.208 "seek_data": false, 00:17:53.208 "copy": true, 00:17:53.208 "nvme_iov_md": false 
00:17:53.208 }, 00:17:53.208 "memory_domains": [ 00:17:53.208 { 00:17:53.208 "dma_device_id": "system", 00:17:53.208 "dma_device_type": 1 00:17:53.208 }, 00:17:53.208 { 00:17:53.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.208 "dma_device_type": 2 00:17:53.208 } 00:17:53.208 ], 00:17:53.208 "driver_specific": {} 00:17:53.208 } 00:17:53.208 ] 00:17:53.208 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.208 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:53.208 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:53.208 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:53.208 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:53.208 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.209 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.209 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.209 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.209 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.209 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.209 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.209 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.209 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:53.209 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.209 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.209 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.209 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.209 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.209 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.209 "name": "Existed_Raid", 00:17:53.209 "uuid": "4aab4f51-ca4b-46b4-9fd3-b2b7562c8ad6", 00:17:53.209 "strip_size_kb": 0, 00:17:53.209 "state": "online", 00:17:53.209 "raid_level": "raid1", 00:17:53.209 "superblock": true, 00:17:53.209 "num_base_bdevs": 2, 00:17:53.209 "num_base_bdevs_discovered": 2, 00:17:53.209 "num_base_bdevs_operational": 2, 00:17:53.209 "base_bdevs_list": [ 00:17:53.209 { 00:17:53.209 "name": "BaseBdev1", 00:17:53.209 "uuid": "6a4d1176-e693-4122-a418-08c663351b73", 00:17:53.209 "is_configured": true, 00:17:53.209 "data_offset": 256, 00:17:53.209 "data_size": 7936 00:17:53.209 }, 00:17:53.209 { 00:17:53.209 "name": "BaseBdev2", 00:17:53.209 "uuid": "ed763106-4693-4fc4-9abd-b0b32da0f146", 00:17:53.209 "is_configured": true, 00:17:53.209 "data_offset": 256, 00:17:53.209 "data_size": 7936 00:17:53.209 } 00:17:53.209 ] 00:17:53.209 }' 00:17:53.209 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.209 18:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.468 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:53.468 18:02:35 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:53.468 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:53.468 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:53.468 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:53.468 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:53.468 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:53.468 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.468 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.468 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:53.468 [2024-11-26 18:02:35.314103] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:53.468 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:53.729 "name": "Existed_Raid", 00:17:53.729 "aliases": [ 00:17:53.729 "4aab4f51-ca4b-46b4-9fd3-b2b7562c8ad6" 00:17:53.729 ], 00:17:53.729 "product_name": "Raid Volume", 00:17:53.729 "block_size": 4096, 00:17:53.729 "num_blocks": 7936, 00:17:53.729 "uuid": "4aab4f51-ca4b-46b4-9fd3-b2b7562c8ad6", 00:17:53.729 "assigned_rate_limits": { 00:17:53.729 "rw_ios_per_sec": 0, 00:17:53.729 "rw_mbytes_per_sec": 0, 00:17:53.729 "r_mbytes_per_sec": 0, 00:17:53.729 "w_mbytes_per_sec": 0 00:17:53.729 }, 00:17:53.729 "claimed": false, 00:17:53.729 "zoned": false, 00:17:53.729 "supported_io_types": { 00:17:53.729 "read": true, 
00:17:53.729 "write": true, 00:17:53.729 "unmap": false, 00:17:53.729 "flush": false, 00:17:53.729 "reset": true, 00:17:53.729 "nvme_admin": false, 00:17:53.729 "nvme_io": false, 00:17:53.729 "nvme_io_md": false, 00:17:53.729 "write_zeroes": true, 00:17:53.729 "zcopy": false, 00:17:53.729 "get_zone_info": false, 00:17:53.729 "zone_management": false, 00:17:53.729 "zone_append": false, 00:17:53.729 "compare": false, 00:17:53.729 "compare_and_write": false, 00:17:53.729 "abort": false, 00:17:53.729 "seek_hole": false, 00:17:53.729 "seek_data": false, 00:17:53.729 "copy": false, 00:17:53.729 "nvme_iov_md": false 00:17:53.729 }, 00:17:53.729 "memory_domains": [ 00:17:53.729 { 00:17:53.729 "dma_device_id": "system", 00:17:53.729 "dma_device_type": 1 00:17:53.729 }, 00:17:53.729 { 00:17:53.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.729 "dma_device_type": 2 00:17:53.729 }, 00:17:53.729 { 00:17:53.729 "dma_device_id": "system", 00:17:53.729 "dma_device_type": 1 00:17:53.729 }, 00:17:53.729 { 00:17:53.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.729 "dma_device_type": 2 00:17:53.729 } 00:17:53.729 ], 00:17:53.729 "driver_specific": { 00:17:53.729 "raid": { 00:17:53.729 "uuid": "4aab4f51-ca4b-46b4-9fd3-b2b7562c8ad6", 00:17:53.729 "strip_size_kb": 0, 00:17:53.729 "state": "online", 00:17:53.729 "raid_level": "raid1", 00:17:53.729 "superblock": true, 00:17:53.729 "num_base_bdevs": 2, 00:17:53.729 "num_base_bdevs_discovered": 2, 00:17:53.729 "num_base_bdevs_operational": 2, 00:17:53.729 "base_bdevs_list": [ 00:17:53.729 { 00:17:53.729 "name": "BaseBdev1", 00:17:53.729 "uuid": "6a4d1176-e693-4122-a418-08c663351b73", 00:17:53.729 "is_configured": true, 00:17:53.729 "data_offset": 256, 00:17:53.729 "data_size": 7936 00:17:53.729 }, 00:17:53.729 { 00:17:53.729 "name": "BaseBdev2", 00:17:53.729 "uuid": "ed763106-4693-4fc4-9abd-b0b32da0f146", 00:17:53.729 "is_configured": true, 00:17:53.729 "data_offset": 256, 00:17:53.729 "data_size": 7936 00:17:53.729 } 
00:17:53.729 ] 00:17:53.729 } 00:17:53.729 } 00:17:53.729 }' 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:53.729 BaseBdev2' 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.729 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.990 [2024-11-26 18:02:35.589784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.990 "name": "Existed_Raid", 00:17:53.990 "uuid": "4aab4f51-ca4b-46b4-9fd3-b2b7562c8ad6", 00:17:53.990 "strip_size_kb": 0, 00:17:53.990 "state": "online", 00:17:53.990 "raid_level": "raid1", 00:17:53.990 "superblock": true, 00:17:53.990 "num_base_bdevs": 2, 00:17:53.990 
"num_base_bdevs_discovered": 1, 00:17:53.990 "num_base_bdevs_operational": 1, 00:17:53.990 "base_bdevs_list": [ 00:17:53.990 { 00:17:53.990 "name": null, 00:17:53.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.990 "is_configured": false, 00:17:53.990 "data_offset": 0, 00:17:53.990 "data_size": 7936 00:17:53.990 }, 00:17:53.990 { 00:17:53.990 "name": "BaseBdev2", 00:17:53.990 "uuid": "ed763106-4693-4fc4-9abd-b0b32da0f146", 00:17:53.990 "is_configured": true, 00:17:53.990 "data_offset": 256, 00:17:53.990 "data_size": 7936 00:17:53.990 } 00:17:53.990 ] 00:17:53.990 }' 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.990 18:02:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.558 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:54.558 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:54.558 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:54.558 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.558 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.558 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.558 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:54.559 18:02:36 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.559 [2024-11-26 18:02:36.199237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:54.559 [2024-11-26 18:02:36.199357] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:54.559 [2024-11-26 18:02:36.305820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.559 [2024-11-26 18:02:36.305889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.559 [2024-11-26 18:02:36.305903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86360 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86360 ']' 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86360 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86360 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.559 killing process with pid 86360 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86360' 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86360 00:17:54.559 [2024-11-26 18:02:36.391451] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:54.559 18:02:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86360 00:17:54.559 [2024-11-26 18:02:36.410188] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.937 18:02:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:55.937 00:17:55.937 real 0m5.244s 00:17:55.937 user 0m7.521s 00:17:55.937 sys 0m0.850s 00:17:55.937 18:02:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:17:55.937 18:02:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.937 ************************************ 00:17:55.937 END TEST raid_state_function_test_sb_4k 00:17:55.937 ************************************ 00:17:55.937 18:02:37 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:55.937 18:02:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:55.937 18:02:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.937 18:02:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.937 ************************************ 00:17:55.937 START TEST raid_superblock_test_4k 00:17:55.937 ************************************ 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86611 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86611 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86611 ']' 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.937 18:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.937 [2024-11-26 18:02:37.784768] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:17:55.937 [2024-11-26 18:02:37.784974] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86611 ] 00:17:56.196 [2024-11-26 18:02:37.960691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.457 [2024-11-26 18:02:38.086109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.457 [2024-11-26 18:02:38.311876] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.457 [2024-11-26 18:02:38.311914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.028 malloc1 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.028 [2024-11-26 18:02:38.702460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:57.028 [2024-11-26 18:02:38.702569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.028 [2024-11-26 18:02:38.702612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:57.028 [2024-11-26 18:02:38.702646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.028 [2024-11-26 18:02:38.704942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.028 [2024-11-26 18:02:38.705025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:57.028 pt1 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.028 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.029 malloc2 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.029 [2024-11-26 18:02:38.763853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:57.029 [2024-11-26 18:02:38.763952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.029 [2024-11-26 18:02:38.763997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:57.029 [2024-11-26 18:02:38.764042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.029 [2024-11-26 18:02:38.766429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.029 [2024-11-26 
18:02:38.766499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:57.029 pt2 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.029 [2024-11-26 18:02:38.775882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:57.029 [2024-11-26 18:02:38.777704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:57.029 [2024-11-26 18:02:38.777871] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:57.029 [2024-11-26 18:02:38.777890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:57.029 [2024-11-26 18:02:38.778226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:57.029 [2024-11-26 18:02:38.778440] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:57.029 [2024-11-26 18:02:38.778496] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:57.029 [2024-11-26 18:02:38.778714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.029 "name": "raid_bdev1", 00:17:57.029 "uuid": "46c0bcb8-6a3e-40a6-a0e0-3fa90a3eff7f", 00:17:57.029 "strip_size_kb": 0, 00:17:57.029 "state": "online", 00:17:57.029 "raid_level": "raid1", 00:17:57.029 "superblock": true, 00:17:57.029 "num_base_bdevs": 2, 00:17:57.029 
"num_base_bdevs_discovered": 2, 00:17:57.029 "num_base_bdevs_operational": 2, 00:17:57.029 "base_bdevs_list": [ 00:17:57.029 { 00:17:57.029 "name": "pt1", 00:17:57.029 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:57.029 "is_configured": true, 00:17:57.029 "data_offset": 256, 00:17:57.029 "data_size": 7936 00:17:57.029 }, 00:17:57.029 { 00:17:57.029 "name": "pt2", 00:17:57.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.029 "is_configured": true, 00:17:57.029 "data_offset": 256, 00:17:57.029 "data_size": 7936 00:17:57.029 } 00:17:57.029 ] 00:17:57.029 }' 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.029 18:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.596 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:57.596 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:57.596 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:57.596 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:57.596 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:57.596 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:57.596 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.596 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:57.596 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.596 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.596 [2024-11-26 18:02:39.299334] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:57.596 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.596 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:57.596 "name": "raid_bdev1", 00:17:57.596 "aliases": [ 00:17:57.596 "46c0bcb8-6a3e-40a6-a0e0-3fa90a3eff7f" 00:17:57.596 ], 00:17:57.596 "product_name": "Raid Volume", 00:17:57.596 "block_size": 4096, 00:17:57.596 "num_blocks": 7936, 00:17:57.596 "uuid": "46c0bcb8-6a3e-40a6-a0e0-3fa90a3eff7f", 00:17:57.596 "assigned_rate_limits": { 00:17:57.596 "rw_ios_per_sec": 0, 00:17:57.596 "rw_mbytes_per_sec": 0, 00:17:57.596 "r_mbytes_per_sec": 0, 00:17:57.596 "w_mbytes_per_sec": 0 00:17:57.596 }, 00:17:57.596 "claimed": false, 00:17:57.596 "zoned": false, 00:17:57.596 "supported_io_types": { 00:17:57.596 "read": true, 00:17:57.596 "write": true, 00:17:57.596 "unmap": false, 00:17:57.596 "flush": false, 00:17:57.596 "reset": true, 00:17:57.596 "nvme_admin": false, 00:17:57.596 "nvme_io": false, 00:17:57.596 "nvme_io_md": false, 00:17:57.596 "write_zeroes": true, 00:17:57.596 "zcopy": false, 00:17:57.596 "get_zone_info": false, 00:17:57.596 "zone_management": false, 00:17:57.596 "zone_append": false, 00:17:57.596 "compare": false, 00:17:57.596 "compare_and_write": false, 00:17:57.596 "abort": false, 00:17:57.596 "seek_hole": false, 00:17:57.596 "seek_data": false, 00:17:57.596 "copy": false, 00:17:57.596 "nvme_iov_md": false 00:17:57.596 }, 00:17:57.596 "memory_domains": [ 00:17:57.596 { 00:17:57.596 "dma_device_id": "system", 00:17:57.596 "dma_device_type": 1 00:17:57.596 }, 00:17:57.596 { 00:17:57.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.596 "dma_device_type": 2 00:17:57.596 }, 00:17:57.596 { 00:17:57.596 "dma_device_id": "system", 00:17:57.596 "dma_device_type": 1 00:17:57.596 }, 00:17:57.596 { 00:17:57.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.596 "dma_device_type": 2 00:17:57.596 } 00:17:57.596 ], 
00:17:57.596 "driver_specific": { 00:17:57.596 "raid": { 00:17:57.596 "uuid": "46c0bcb8-6a3e-40a6-a0e0-3fa90a3eff7f", 00:17:57.596 "strip_size_kb": 0, 00:17:57.596 "state": "online", 00:17:57.596 "raid_level": "raid1", 00:17:57.597 "superblock": true, 00:17:57.597 "num_base_bdevs": 2, 00:17:57.597 "num_base_bdevs_discovered": 2, 00:17:57.597 "num_base_bdevs_operational": 2, 00:17:57.597 "base_bdevs_list": [ 00:17:57.597 { 00:17:57.597 "name": "pt1", 00:17:57.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:57.597 "is_configured": true, 00:17:57.597 "data_offset": 256, 00:17:57.597 "data_size": 7936 00:17:57.597 }, 00:17:57.597 { 00:17:57.597 "name": "pt2", 00:17:57.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.597 "is_configured": true, 00:17:57.597 "data_offset": 256, 00:17:57.597 "data_size": 7936 00:17:57.597 } 00:17:57.597 ] 00:17:57.597 } 00:17:57.597 } 00:17:57.597 }' 00:17:57.597 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:57.597 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:57.597 pt2' 00:17:57.597 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.597 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:57.597 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.597 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:57.597 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.597 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.597 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:57.857 [2024-11-26 18:02:39.554934] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=46c0bcb8-6a3e-40a6-a0e0-3fa90a3eff7f 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 46c0bcb8-6a3e-40a6-a0e0-3fa90a3eff7f ']' 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.857 [2024-11-26 18:02:39.602476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.857 [2024-11-26 18:02:39.602509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.857 [2024-11-26 18:02:39.602609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.857 [2024-11-26 18:02:39.602678] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.857 [2024-11-26 18:02:39.602693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.857 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.118 [2024-11-26 18:02:39.738303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:58.118 [2024-11-26 18:02:39.740531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:58.118 [2024-11-26 18:02:39.740608] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:58.118 [2024-11-26 18:02:39.740678] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:58.118 [2024-11-26 18:02:39.740703] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:58.118 [2024-11-26 18:02:39.740721] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:58.118 request: 00:17:58.118 { 00:17:58.118 "name": "raid_bdev1", 00:17:58.118 "raid_level": "raid1", 00:17:58.118 "base_bdevs": [ 00:17:58.118 "malloc1", 00:17:58.118 "malloc2" 00:17:58.118 ], 00:17:58.118 "superblock": false, 00:17:58.118 "method": "bdev_raid_create", 00:17:58.118 "req_id": 1 00:17:58.118 } 00:17:58.118 Got JSON-RPC error response 00:17:58.118 response: 00:17:58.118 { 00:17:58.118 "code": -17, 00:17:58.118 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:58.118 } 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.118 [2024-11-26 18:02:39.806177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:58.118 [2024-11-26 18:02:39.806306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.118 [2024-11-26 18:02:39.806371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:58.118 [2024-11-26 18:02:39.806426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.118 [2024-11-26 18:02:39.809139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.118 [2024-11-26 18:02:39.809229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:58.118 [2024-11-26 18:02:39.809370] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:58.118 [2024-11-26 18:02:39.809484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:58.118 pt1 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.118 "name": "raid_bdev1", 00:17:58.118 "uuid": "46c0bcb8-6a3e-40a6-a0e0-3fa90a3eff7f", 00:17:58.118 "strip_size_kb": 0, 00:17:58.118 "state": "configuring", 00:17:58.118 "raid_level": "raid1", 00:17:58.118 "superblock": true, 00:17:58.118 "num_base_bdevs": 2, 00:17:58.118 "num_base_bdevs_discovered": 1, 00:17:58.118 "num_base_bdevs_operational": 2, 00:17:58.118 "base_bdevs_list": [ 00:17:58.118 { 00:17:58.118 "name": "pt1", 00:17:58.118 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.118 "is_configured": true, 00:17:58.118 "data_offset": 256, 00:17:58.118 "data_size": 7936 00:17:58.118 }, 00:17:58.118 { 00:17:58.118 "name": null, 00:17:58.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.118 "is_configured": false, 00:17:58.118 "data_offset": 256, 00:17:58.118 "data_size": 7936 00:17:58.118 } 
00:17:58.118 ] 00:17:58.118 }' 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.118 18:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.687 [2024-11-26 18:02:40.301365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:58.687 [2024-11-26 18:02:40.301457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.687 [2024-11-26 18:02:40.301483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:58.687 [2024-11-26 18:02:40.301516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.687 [2024-11-26 18:02:40.302096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.687 [2024-11-26 18:02:40.302122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:58.687 [2024-11-26 18:02:40.302222] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:58.687 [2024-11-26 18:02:40.302254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:58.687 [2024-11-26 18:02:40.302395] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:58.687 [2024-11-26 18:02:40.302409] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:58.687 [2024-11-26 18:02:40.302695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:58.687 [2024-11-26 18:02:40.302952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:58.687 [2024-11-26 18:02:40.302966] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:58.687 [2024-11-26 18:02:40.303155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.687 pt2 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.687 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.687 "name": "raid_bdev1", 00:17:58.687 "uuid": "46c0bcb8-6a3e-40a6-a0e0-3fa90a3eff7f", 00:17:58.688 "strip_size_kb": 0, 00:17:58.688 "state": "online", 00:17:58.688 "raid_level": "raid1", 00:17:58.688 "superblock": true, 00:17:58.688 "num_base_bdevs": 2, 00:17:58.688 "num_base_bdevs_discovered": 2, 00:17:58.688 "num_base_bdevs_operational": 2, 00:17:58.688 "base_bdevs_list": [ 00:17:58.688 { 00:17:58.688 "name": "pt1", 00:17:58.688 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.688 "is_configured": true, 00:17:58.688 "data_offset": 256, 00:17:58.688 "data_size": 7936 00:17:58.688 }, 00:17:58.688 { 00:17:58.688 "name": "pt2", 00:17:58.688 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.688 "is_configured": true, 00:17:58.688 "data_offset": 256, 00:17:58.688 "data_size": 7936 00:17:58.688 } 00:17:58.688 ] 00:17:58.688 }' 00:17:58.688 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.688 18:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.946 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:17:58.946 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:58.946 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:58.946 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:58.946 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:58.946 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:58.946 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:58.946 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:58.946 18:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.946 18:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.946 [2024-11-26 18:02:40.780832] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.946 18:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.206 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:59.207 "name": "raid_bdev1", 00:17:59.207 "aliases": [ 00:17:59.207 "46c0bcb8-6a3e-40a6-a0e0-3fa90a3eff7f" 00:17:59.207 ], 00:17:59.207 "product_name": "Raid Volume", 00:17:59.207 "block_size": 4096, 00:17:59.207 "num_blocks": 7936, 00:17:59.207 "uuid": "46c0bcb8-6a3e-40a6-a0e0-3fa90a3eff7f", 00:17:59.207 "assigned_rate_limits": { 00:17:59.207 "rw_ios_per_sec": 0, 00:17:59.207 "rw_mbytes_per_sec": 0, 00:17:59.207 "r_mbytes_per_sec": 0, 00:17:59.207 "w_mbytes_per_sec": 0 00:17:59.207 }, 00:17:59.207 "claimed": false, 00:17:59.207 "zoned": false, 00:17:59.207 "supported_io_types": { 00:17:59.207 "read": true, 00:17:59.207 "write": true, 00:17:59.207 "unmap": false, 
00:17:59.207 "flush": false, 00:17:59.207 "reset": true, 00:17:59.207 "nvme_admin": false, 00:17:59.207 "nvme_io": false, 00:17:59.207 "nvme_io_md": false, 00:17:59.207 "write_zeroes": true, 00:17:59.207 "zcopy": false, 00:17:59.207 "get_zone_info": false, 00:17:59.207 "zone_management": false, 00:17:59.207 "zone_append": false, 00:17:59.207 "compare": false, 00:17:59.207 "compare_and_write": false, 00:17:59.207 "abort": false, 00:17:59.207 "seek_hole": false, 00:17:59.207 "seek_data": false, 00:17:59.207 "copy": false, 00:17:59.207 "nvme_iov_md": false 00:17:59.207 }, 00:17:59.207 "memory_domains": [ 00:17:59.207 { 00:17:59.207 "dma_device_id": "system", 00:17:59.207 "dma_device_type": 1 00:17:59.207 }, 00:17:59.207 { 00:17:59.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.207 "dma_device_type": 2 00:17:59.207 }, 00:17:59.207 { 00:17:59.207 "dma_device_id": "system", 00:17:59.207 "dma_device_type": 1 00:17:59.207 }, 00:17:59.207 { 00:17:59.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.207 "dma_device_type": 2 00:17:59.207 } 00:17:59.207 ], 00:17:59.207 "driver_specific": { 00:17:59.207 "raid": { 00:17:59.207 "uuid": "46c0bcb8-6a3e-40a6-a0e0-3fa90a3eff7f", 00:17:59.207 "strip_size_kb": 0, 00:17:59.207 "state": "online", 00:17:59.207 "raid_level": "raid1", 00:17:59.207 "superblock": true, 00:17:59.207 "num_base_bdevs": 2, 00:17:59.207 "num_base_bdevs_discovered": 2, 00:17:59.207 "num_base_bdevs_operational": 2, 00:17:59.207 "base_bdevs_list": [ 00:17:59.207 { 00:17:59.207 "name": "pt1", 00:17:59.207 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:59.207 "is_configured": true, 00:17:59.207 "data_offset": 256, 00:17:59.207 "data_size": 7936 00:17:59.207 }, 00:17:59.207 { 00:17:59.207 "name": "pt2", 00:17:59.207 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.207 "is_configured": true, 00:17:59.207 "data_offset": 256, 00:17:59.207 "data_size": 7936 00:17:59.207 } 00:17:59.207 ] 00:17:59.207 } 00:17:59.207 } 00:17:59.207 }' 00:17:59.207 
18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:59.207 pt2' 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.207 
18:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.207 18:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.207 [2024-11-26 18:02:40.992456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 46c0bcb8-6a3e-40a6-a0e0-3fa90a3eff7f '!=' 46c0bcb8-6a3e-40a6-a0e0-3fa90a3eff7f ']' 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.207 [2024-11-26 18:02:41.036215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:59.207 
18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.207 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.467 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.467 "name": "raid_bdev1", 00:17:59.467 "uuid": "46c0bcb8-6a3e-40a6-a0e0-3fa90a3eff7f", 
00:17:59.467 "strip_size_kb": 0, 00:17:59.467 "state": "online", 00:17:59.467 "raid_level": "raid1", 00:17:59.467 "superblock": true, 00:17:59.467 "num_base_bdevs": 2, 00:17:59.467 "num_base_bdevs_discovered": 1, 00:17:59.467 "num_base_bdevs_operational": 1, 00:17:59.467 "base_bdevs_list": [ 00:17:59.467 { 00:17:59.467 "name": null, 00:17:59.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.467 "is_configured": false, 00:17:59.467 "data_offset": 0, 00:17:59.467 "data_size": 7936 00:17:59.467 }, 00:17:59.467 { 00:17:59.467 "name": "pt2", 00:17:59.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.467 "is_configured": true, 00:17:59.467 "data_offset": 256, 00:17:59.467 "data_size": 7936 00:17:59.467 } 00:17:59.467 ] 00:17:59.467 }' 00:17:59.467 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.467 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.727 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:59.727 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.727 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.727 [2024-11-26 18:02:41.531293] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.727 [2024-11-26 18:02:41.531325] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.727 [2024-11-26 18:02:41.531414] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.727 [2024-11-26 18:02:41.531462] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.727 [2024-11-26 18:02:41.531474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:59.727 18:02:41 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.727 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.727 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.727 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.727 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:59.727 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.727 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:59.727 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:59.727 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:59.727 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:59.727 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:59.727 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.727 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:59.988 18:02:41 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.988 [2024-11-26 18:02:41.607155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:59.988 [2024-11-26 18:02:41.607217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.988 [2024-11-26 18:02:41.607240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:59.988 [2024-11-26 18:02:41.607256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.988 [2024-11-26 18:02:41.609570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.988 [2024-11-26 18:02:41.609615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:59.988 [2024-11-26 18:02:41.609701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:59.988 [2024-11-26 18:02:41.609767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:59.988 [2024-11-26 18:02:41.609883] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:59.988 [2024-11-26 18:02:41.609896] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:59.988 [2024-11-26 18:02:41.610142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:59.988 [2024-11-26 18:02:41.610390] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:59.988 [2024-11-26 18:02:41.610406] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:17:59.988 [2024-11-26 18:02:41.610570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.988 pt2 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.988 "name": "raid_bdev1", 00:17:59.988 "uuid": "46c0bcb8-6a3e-40a6-a0e0-3fa90a3eff7f", 00:17:59.988 "strip_size_kb": 0, 00:17:59.988 "state": "online", 00:17:59.988 "raid_level": "raid1", 00:17:59.988 "superblock": true, 00:17:59.988 "num_base_bdevs": 2, 00:17:59.988 "num_base_bdevs_discovered": 1, 00:17:59.988 "num_base_bdevs_operational": 1, 00:17:59.988 "base_bdevs_list": [ 00:17:59.988 { 00:17:59.988 "name": null, 00:17:59.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.988 "is_configured": false, 00:17:59.988 "data_offset": 256, 00:17:59.988 "data_size": 7936 00:17:59.988 }, 00:17:59.988 { 00:17:59.988 "name": "pt2", 00:17:59.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.988 "is_configured": true, 00:17:59.988 "data_offset": 256, 00:17:59.988 "data_size": 7936 00:17:59.988 } 00:17:59.988 ] 00:17:59.988 }' 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.988 18:02:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.247 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:00.247 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.248 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.248 [2024-11-26 18:02:42.106302] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.248 [2024-11-26 18:02:42.106406] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.248 [2024-11-26 18:02:42.106524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.248 [2024-11-26 18:02:42.106616] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.248 [2024-11-26 18:02:42.106670] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.507 [2024-11-26 18:02:42.174249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:00.507 [2024-11-26 18:02:42.174334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.507 [2024-11-26 18:02:42.174360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:00.507 [2024-11-26 18:02:42.174370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.507 [2024-11-26 18:02:42.177029] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.507 [2024-11-26 18:02:42.177073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:00.507 [2024-11-26 18:02:42.177181] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:00.507 [2024-11-26 18:02:42.177243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:00.507 [2024-11-26 18:02:42.177425] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:00.507 [2024-11-26 18:02:42.177445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.507 [2024-11-26 18:02:42.177464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:00.507 [2024-11-26 18:02:42.177560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.507 [2024-11-26 18:02:42.177660] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:00.507 [2024-11-26 18:02:42.177671] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:00.507 [2024-11-26 18:02:42.177973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:00.507 [2024-11-26 18:02:42.178183] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:00.507 [2024-11-26 18:02:42.178216] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:00.507 [2024-11-26 18:02:42.178450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.507 pt1 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.507 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.508 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.508 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.508 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.508 "name": "raid_bdev1", 00:18:00.508 "uuid": "46c0bcb8-6a3e-40a6-a0e0-3fa90a3eff7f", 00:18:00.508 "strip_size_kb": 0, 00:18:00.508 "state": "online", 00:18:00.508 "raid_level": "raid1", 
00:18:00.508 "superblock": true, 00:18:00.508 "num_base_bdevs": 2, 00:18:00.508 "num_base_bdevs_discovered": 1, 00:18:00.508 "num_base_bdevs_operational": 1, 00:18:00.508 "base_bdevs_list": [ 00:18:00.508 { 00:18:00.508 "name": null, 00:18:00.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.508 "is_configured": false, 00:18:00.508 "data_offset": 256, 00:18:00.508 "data_size": 7936 00:18:00.508 }, 00:18:00.508 { 00:18:00.508 "name": "pt2", 00:18:00.508 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.508 "is_configured": true, 00:18:00.508 "data_offset": 256, 00:18:00.508 "data_size": 7936 00:18:00.508 } 00:18:00.508 ] 00:18:00.508 }' 00:18:00.508 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.508 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.766 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:00.766 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.766 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.024 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:01.024 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.024 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:01.024 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:01.024 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:01.024 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.024 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.024 
[2024-11-26 18:02:42.669970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.024 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.024 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 46c0bcb8-6a3e-40a6-a0e0-3fa90a3eff7f '!=' 46c0bcb8-6a3e-40a6-a0e0-3fa90a3eff7f ']' 00:18:01.024 18:02:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86611 00:18:01.024 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86611 ']' 00:18:01.024 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86611 00:18:01.024 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:18:01.024 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.024 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86611 00:18:01.024 killing process with pid 86611 00:18:01.025 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:01.025 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:01.025 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86611' 00:18:01.025 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86611 00:18:01.025 [2024-11-26 18:02:42.722441] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:01.025 [2024-11-26 18:02:42.722551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.025 18:02:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86611 00:18:01.025 [2024-11-26 18:02:42.722605] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:18:01.025 [2024-11-26 18:02:42.722623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:01.284 [2024-11-26 18:02:42.971789] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:02.665 18:02:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:18:02.665 00:18:02.665 real 0m6.571s 00:18:02.665 user 0m9.946s 00:18:02.665 sys 0m1.090s 00:18:02.665 18:02:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.665 ************************************ 00:18:02.665 END TEST raid_superblock_test_4k 00:18:02.665 ************************************ 00:18:02.665 18:02:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.665 18:02:44 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:18:02.665 18:02:44 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:18:02.665 18:02:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:02.665 18:02:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.665 18:02:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:02.665 ************************************ 00:18:02.665 START TEST raid_rebuild_test_sb_4k 00:18:02.665 ************************************ 00:18:02.665 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:02.665 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:02.665 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:02.665 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:02.665 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:02.665 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:02.666 18:02:44 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86941 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86941 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86941 ']' 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.666 18:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.666 [2024-11-26 18:02:44.450735] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:18:02.666 [2024-11-26 18:02:44.451062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86941 ] 00:18:02.666 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:02.666 Zero copy mechanism will not be used. 00:18:02.924 [2024-11-26 18:02:44.634329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.924 [2024-11-26 18:02:44.766408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.206 [2024-11-26 18:02:45.000658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:03.206 [2024-11-26 18:02:45.000840] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.782 BaseBdev1_malloc 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.782 [2024-11-26 18:02:45.396424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:03.782 [2024-11-26 18:02:45.396511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.782 [2024-11-26 18:02:45.396539] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:18:03.782 [2024-11-26 18:02:45.396552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.782 [2024-11-26 18:02:45.399234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.782 [2024-11-26 18:02:45.399282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:03.782 BaseBdev1 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.782 BaseBdev2_malloc 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.782 [2024-11-26 18:02:45.453577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:03.782 [2024-11-26 18:02:45.453667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.782 [2024-11-26 18:02:45.453696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:03.782 [2024-11-26 18:02:45.453709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:03.782 [2024-11-26 18:02:45.456178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.782 [2024-11-26 18:02:45.456223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:03.782 BaseBdev2 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.782 spare_malloc 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.782 spare_delay 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.782 [2024-11-26 18:02:45.537021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:03.782 [2024-11-26 18:02:45.537105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.782 [2024-11-26 18:02:45.537131] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:03.782 [2024-11-26 18:02:45.537143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.782 [2024-11-26 18:02:45.539700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.782 [2024-11-26 18:02:45.539801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:03.782 spare 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.782 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.782 [2024-11-26 18:02:45.549062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:03.782 [2024-11-26 18:02:45.551116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:03.782 [2024-11-26 18:02:45.551325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:03.782 [2024-11-26 18:02:45.551350] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:03.782 [2024-11-26 18:02:45.551680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:03.782 [2024-11-26 18:02:45.551869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:03.782 [2024-11-26 18:02:45.551880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:03.782 [2024-11-26 18:02:45.552104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.782 
18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.783 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:03.783 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.783 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.783 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.783 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.783 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:03.783 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.783 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.783 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.783 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.783 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.783 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.783 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.783 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.783 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.783 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.783 "name": "raid_bdev1", 00:18:03.783 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 
00:18:03.783 "strip_size_kb": 0, 00:18:03.783 "state": "online", 00:18:03.783 "raid_level": "raid1", 00:18:03.783 "superblock": true, 00:18:03.783 "num_base_bdevs": 2, 00:18:03.783 "num_base_bdevs_discovered": 2, 00:18:03.783 "num_base_bdevs_operational": 2, 00:18:03.783 "base_bdevs_list": [ 00:18:03.783 { 00:18:03.783 "name": "BaseBdev1", 00:18:03.783 "uuid": "03d0ce30-f070-596e-b0b0-58cdbdb6755a", 00:18:03.783 "is_configured": true, 00:18:03.783 "data_offset": 256, 00:18:03.783 "data_size": 7936 00:18:03.783 }, 00:18:03.783 { 00:18:03.783 "name": "BaseBdev2", 00:18:03.783 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:03.783 "is_configured": true, 00:18:03.783 "data_offset": 256, 00:18:03.783 "data_size": 7936 00:18:03.783 } 00:18:03.783 ] 00:18:03.783 }' 00:18:03.783 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.783 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.353 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:04.353 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.353 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.353 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:04.353 [2024-11-26 18:02:45.936723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.353 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.353 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:04.353 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.353 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:18:04.353 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.353 18:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.353 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.353 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:04.353 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:04.353 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:04.353 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:04.353 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:04.353 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:04.353 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:04.353 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:04.353 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:04.353 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:04.353 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:04.353 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:04.353 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:04.353 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:04.613 [2024-11-26 18:02:46.303856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:18:04.613 /dev/nbd0 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:04.613 1+0 records in 00:18:04.613 1+0 records out 00:18:04.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317173 s, 12.9 MB/s 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:04.613 18:02:46 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:04.613 18:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:05.551 7936+0 records in 00:18:05.551 7936+0 records out 00:18:05.551 32505856 bytes (33 MB, 31 MiB) copied, 0.783928 s, 41.5 MB/s 00:18:05.551 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:05.551 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:05.551 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:05.551 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:05.551 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:05.551 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:05.551 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:05.811 [2024-11-26 18:02:47.421454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.811 [2024-11-26 18:02:47.441567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.811 "name": "raid_bdev1", 00:18:05.811 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:05.811 "strip_size_kb": 0, 00:18:05.811 "state": "online", 00:18:05.811 "raid_level": "raid1", 00:18:05.811 "superblock": true, 00:18:05.811 "num_base_bdevs": 2, 00:18:05.811 "num_base_bdevs_discovered": 1, 00:18:05.811 "num_base_bdevs_operational": 1, 00:18:05.811 "base_bdevs_list": [ 00:18:05.811 { 00:18:05.811 "name": null, 00:18:05.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.811 "is_configured": false, 00:18:05.811 "data_offset": 0, 00:18:05.811 "data_size": 7936 00:18:05.811 }, 00:18:05.811 { 00:18:05.811 "name": "BaseBdev2", 00:18:05.811 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:05.811 "is_configured": true, 00:18:05.811 "data_offset": 256, 00:18:05.811 "data_size": 7936 00:18:05.811 } 00:18:05.811 ] 00:18:05.811 }' 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.811 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.072 18:02:47 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:06.072 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.072 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.072 [2024-11-26 18:02:47.900811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.072 [2024-11-26 18:02:47.919736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:06.072 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.072 18:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:06.072 [2024-11-26 18:02:47.922146] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:07.449 18:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.449 18:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.449 18:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.449 18:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.449 18:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.449 18:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.449 18:02:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.449 18:02:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.449 18:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.449 18:02:48 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.449 18:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.449 "name": "raid_bdev1", 00:18:07.449 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:07.449 "strip_size_kb": 0, 00:18:07.449 "state": "online", 00:18:07.449 "raid_level": "raid1", 00:18:07.449 "superblock": true, 00:18:07.449 "num_base_bdevs": 2, 00:18:07.449 "num_base_bdevs_discovered": 2, 00:18:07.449 "num_base_bdevs_operational": 2, 00:18:07.449 "process": { 00:18:07.449 "type": "rebuild", 00:18:07.449 "target": "spare", 00:18:07.449 "progress": { 00:18:07.449 "blocks": 2560, 00:18:07.449 "percent": 32 00:18:07.449 } 00:18:07.449 }, 00:18:07.449 "base_bdevs_list": [ 00:18:07.449 { 00:18:07.449 "name": "spare", 00:18:07.449 "uuid": "57611404-19f4-5291-b1f8-1ebc7a4e5117", 00:18:07.449 "is_configured": true, 00:18:07.449 "data_offset": 256, 00:18:07.449 "data_size": 7936 00:18:07.449 }, 00:18:07.449 { 00:18:07.449 "name": "BaseBdev2", 00:18:07.449 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:07.449 "is_configured": true, 00:18:07.449 "data_offset": 256, 00:18:07.449 "data_size": 7936 00:18:07.449 } 00:18:07.449 ] 00:18:07.449 }' 00:18:07.449 18:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.449 18:02:49 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.449 [2024-11-26 18:02:49.073700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.449 [2024-11-26 18:02:49.128878] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:07.449 [2024-11-26 18:02:49.128990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.449 [2024-11-26 18:02:49.129009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.449 [2024-11-26 18:02:49.129038] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.449 18:02:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.449 "name": "raid_bdev1", 00:18:07.449 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:07.449 "strip_size_kb": 0, 00:18:07.449 "state": "online", 00:18:07.449 "raid_level": "raid1", 00:18:07.449 "superblock": true, 00:18:07.449 "num_base_bdevs": 2, 00:18:07.449 "num_base_bdevs_discovered": 1, 00:18:07.449 "num_base_bdevs_operational": 1, 00:18:07.449 "base_bdevs_list": [ 00:18:07.449 { 00:18:07.449 "name": null, 00:18:07.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.449 "is_configured": false, 00:18:07.449 "data_offset": 0, 00:18:07.449 "data_size": 7936 00:18:07.449 }, 00:18:07.449 { 00:18:07.449 "name": "BaseBdev2", 00:18:07.449 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:07.449 "is_configured": true, 00:18:07.449 "data_offset": 256, 00:18:07.449 "data_size": 7936 00:18:07.449 } 00:18:07.449 ] 00:18:07.449 }' 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.449 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.018 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:08.018 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.019 18:02:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:08.019 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:08.019 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.019 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.019 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.019 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.019 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.019 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.019 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.019 "name": "raid_bdev1", 00:18:08.019 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:08.019 "strip_size_kb": 0, 00:18:08.019 "state": "online", 00:18:08.019 "raid_level": "raid1", 00:18:08.019 "superblock": true, 00:18:08.019 "num_base_bdevs": 2, 00:18:08.019 "num_base_bdevs_discovered": 1, 00:18:08.019 "num_base_bdevs_operational": 1, 00:18:08.019 "base_bdevs_list": [ 00:18:08.019 { 00:18:08.019 "name": null, 00:18:08.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.019 "is_configured": false, 00:18:08.019 "data_offset": 0, 00:18:08.019 "data_size": 7936 00:18:08.019 }, 00:18:08.019 { 00:18:08.019 "name": "BaseBdev2", 00:18:08.019 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:08.019 "is_configured": true, 00:18:08.019 "data_offset": 256, 00:18:08.019 "data_size": 7936 00:18:08.019 } 00:18:08.019 ] 00:18:08.019 }' 00:18:08.019 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.019 18:02:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:08.019 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.019 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:08.019 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:08.019 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.019 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.019 [2024-11-26 18:02:49.802716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:08.019 [2024-11-26 18:02:49.821836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:08.019 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.019 18:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:08.019 [2024-11-26 18:02:49.824078] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.397 "name": "raid_bdev1", 00:18:09.397 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:09.397 "strip_size_kb": 0, 00:18:09.397 "state": "online", 00:18:09.397 "raid_level": "raid1", 00:18:09.397 "superblock": true, 00:18:09.397 "num_base_bdevs": 2, 00:18:09.397 "num_base_bdevs_discovered": 2, 00:18:09.397 "num_base_bdevs_operational": 2, 00:18:09.397 "process": { 00:18:09.397 "type": "rebuild", 00:18:09.397 "target": "spare", 00:18:09.397 "progress": { 00:18:09.397 "blocks": 2560, 00:18:09.397 "percent": 32 00:18:09.397 } 00:18:09.397 }, 00:18:09.397 "base_bdevs_list": [ 00:18:09.397 { 00:18:09.397 "name": "spare", 00:18:09.397 "uuid": "57611404-19f4-5291-b1f8-1ebc7a4e5117", 00:18:09.397 "is_configured": true, 00:18:09.397 "data_offset": 256, 00:18:09.397 "data_size": 7936 00:18:09.397 }, 00:18:09.397 { 00:18:09.397 "name": "BaseBdev2", 00:18:09.397 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:09.397 "is_configured": true, 00:18:09.397 "data_offset": 256, 00:18:09.397 "data_size": 7936 00:18:09.397 } 00:18:09.397 ] 00:18:09.397 }' 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:09.397 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=711 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.397 18:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.397 18:02:50 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.397 18:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.397 "name": "raid_bdev1", 00:18:09.397 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:09.397 "strip_size_kb": 0, 00:18:09.397 "state": "online", 00:18:09.397 "raid_level": "raid1", 00:18:09.397 "superblock": true, 00:18:09.397 "num_base_bdevs": 2, 00:18:09.397 "num_base_bdevs_discovered": 2, 00:18:09.397 "num_base_bdevs_operational": 2, 00:18:09.397 "process": { 00:18:09.397 "type": "rebuild", 00:18:09.397 "target": "spare", 00:18:09.397 "progress": { 00:18:09.397 "blocks": 2816, 00:18:09.397 "percent": 35 00:18:09.397 } 00:18:09.397 }, 00:18:09.397 "base_bdevs_list": [ 00:18:09.397 { 00:18:09.397 "name": "spare", 00:18:09.397 "uuid": "57611404-19f4-5291-b1f8-1ebc7a4e5117", 00:18:09.397 "is_configured": true, 00:18:09.397 "data_offset": 256, 00:18:09.397 "data_size": 7936 00:18:09.397 }, 00:18:09.397 { 00:18:09.397 "name": "BaseBdev2", 00:18:09.397 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:09.397 "is_configured": true, 00:18:09.397 "data_offset": 256, 00:18:09.397 "data_size": 7936 00:18:09.397 } 00:18:09.397 ] 00:18:09.397 }' 00:18:09.397 18:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.397 18:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.397 18:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.397 18:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.397 18:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:10.334 18:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:10.334 18:02:52 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.334 18:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.334 18:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.334 18:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.334 18:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.334 18:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.334 18:02:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.334 18:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.334 18:02:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.334 18:02:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.335 18:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.335 "name": "raid_bdev1", 00:18:10.335 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:10.335 "strip_size_kb": 0, 00:18:10.335 "state": "online", 00:18:10.335 "raid_level": "raid1", 00:18:10.335 "superblock": true, 00:18:10.335 "num_base_bdevs": 2, 00:18:10.335 "num_base_bdevs_discovered": 2, 00:18:10.335 "num_base_bdevs_operational": 2, 00:18:10.335 "process": { 00:18:10.335 "type": "rebuild", 00:18:10.335 "target": "spare", 00:18:10.335 "progress": { 00:18:10.335 "blocks": 5632, 00:18:10.335 "percent": 70 00:18:10.335 } 00:18:10.335 }, 00:18:10.335 "base_bdevs_list": [ 00:18:10.335 { 00:18:10.335 "name": "spare", 00:18:10.335 "uuid": "57611404-19f4-5291-b1f8-1ebc7a4e5117", 00:18:10.335 "is_configured": true, 00:18:10.335 "data_offset": 256, 00:18:10.335 "data_size": 7936 00:18:10.335 
}, 00:18:10.335 { 00:18:10.335 "name": "BaseBdev2", 00:18:10.335 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:10.335 "is_configured": true, 00:18:10.335 "data_offset": 256, 00:18:10.335 "data_size": 7936 00:18:10.335 } 00:18:10.335 ] 00:18:10.335 }' 00:18:10.335 18:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.594 18:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:10.594 18:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.594 18:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.594 18:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:11.163 [2024-11-26 18:02:52.940340] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:11.163 [2024-11-26 18:02:52.940574] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:11.163 [2024-11-26 18:02:52.940819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.423 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:11.423 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.423 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.423 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.423 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.423 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.423 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:11.423 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.423 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.423 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.423 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.683 "name": "raid_bdev1", 00:18:11.683 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:11.683 "strip_size_kb": 0, 00:18:11.683 "state": "online", 00:18:11.683 "raid_level": "raid1", 00:18:11.683 "superblock": true, 00:18:11.683 "num_base_bdevs": 2, 00:18:11.683 "num_base_bdevs_discovered": 2, 00:18:11.683 "num_base_bdevs_operational": 2, 00:18:11.683 "base_bdevs_list": [ 00:18:11.683 { 00:18:11.683 "name": "spare", 00:18:11.683 "uuid": "57611404-19f4-5291-b1f8-1ebc7a4e5117", 00:18:11.683 "is_configured": true, 00:18:11.683 "data_offset": 256, 00:18:11.683 "data_size": 7936 00:18:11.683 }, 00:18:11.683 { 00:18:11.683 "name": "BaseBdev2", 00:18:11.683 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:11.683 "is_configured": true, 00:18:11.683 "data_offset": 256, 00:18:11.683 "data_size": 7936 00:18:11.683 } 00:18:11.683 ] 00:18:11.683 }' 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.683 "name": "raid_bdev1", 00:18:11.683 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:11.683 "strip_size_kb": 0, 00:18:11.683 "state": "online", 00:18:11.683 "raid_level": "raid1", 00:18:11.683 "superblock": true, 00:18:11.683 "num_base_bdevs": 2, 00:18:11.683 "num_base_bdevs_discovered": 2, 00:18:11.683 "num_base_bdevs_operational": 2, 00:18:11.683 "base_bdevs_list": [ 00:18:11.683 { 00:18:11.683 "name": "spare", 00:18:11.683 "uuid": "57611404-19f4-5291-b1f8-1ebc7a4e5117", 00:18:11.683 "is_configured": true, 00:18:11.683 "data_offset": 256, 00:18:11.683 "data_size": 7936 00:18:11.683 }, 00:18:11.683 { 00:18:11.683 "name": "BaseBdev2", 00:18:11.683 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:11.683 "is_configured": true, 
00:18:11.683 "data_offset": 256, 00:18:11.683 "data_size": 7936 00:18:11.683 } 00:18:11.683 ] 00:18:11.683 }' 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:11.683 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.942 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:11.942 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:11.942 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.942 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.942 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.942 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.942 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:11.942 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.942 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.942 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.942 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.942 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.942 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.942 18:02:53 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.942 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.942 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.942 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.942 "name": "raid_bdev1", 00:18:11.942 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:11.942 "strip_size_kb": 0, 00:18:11.942 "state": "online", 00:18:11.942 "raid_level": "raid1", 00:18:11.942 "superblock": true, 00:18:11.942 "num_base_bdevs": 2, 00:18:11.942 "num_base_bdevs_discovered": 2, 00:18:11.942 "num_base_bdevs_operational": 2, 00:18:11.942 "base_bdevs_list": [ 00:18:11.942 { 00:18:11.942 "name": "spare", 00:18:11.942 "uuid": "57611404-19f4-5291-b1f8-1ebc7a4e5117", 00:18:11.942 "is_configured": true, 00:18:11.942 "data_offset": 256, 00:18:11.942 "data_size": 7936 00:18:11.942 }, 00:18:11.942 { 00:18:11.942 "name": "BaseBdev2", 00:18:11.942 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:11.942 "is_configured": true, 00:18:11.942 "data_offset": 256, 00:18:11.942 "data_size": 7936 00:18:11.942 } 00:18:11.942 ] 00:18:11.942 }' 00:18:11.942 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.942 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.201 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:12.201 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.201 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.201 [2024-11-26 18:02:53.992453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:12.201 [2024-11-26 18:02:53.992491] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:18:12.201 [2024-11-26 18:02:53.992603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.201 [2024-11-26 18:02:53.992689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.201 [2024-11-26 18:02:53.992704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:12.201 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.201 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:12.201 18:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.201 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.201 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.201 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.201 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:12.201 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:12.201 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:12.201 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:12.201 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:12.201 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:12.201 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:12.201 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:12.201 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:12.201 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:12.201 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:12.201 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:12.201 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:12.459 /dev/nbd0 00:18:12.459 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:12.459 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:12.719 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:12.719 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:12.719 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:12.719 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:12.719 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:12.719 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:12.719 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:12.719 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:12.719 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:12.719 1+0 records in 00:18:12.719 1+0 records out 00:18:12.719 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612798 s, 6.7 MB/s 00:18:12.719 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:12.719 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:12.719 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:12.719 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:12.719 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:12.719 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:12.719 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:12.719 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:12.979 /dev/nbd1 00:18:12.979 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:12.979 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:12.979 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:12.979 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:12.979 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:12.979 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:12.979 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:12.979 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:12.979 18:02:54 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:12.979 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:12.979 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:12.979 1+0 records in 00:18:12.979 1+0 records out 00:18:12.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446877 s, 9.2 MB/s 00:18:12.979 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:12.979 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:12.979 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:12.979 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:12.979 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:12.979 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:12.979 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:12.979 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:13.239 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:13.239 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:13.239 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:13.240 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:13.240 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:18:13.240 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:13.240 18:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:13.500 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:13.500 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:13.500 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:13.500 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:13.500 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:13.500 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:13.500 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:13.500 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:13.500 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:13.500 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:13.776 18:02:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.776 [2024-11-26 18:02:55.466423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:13.776 [2024-11-26 18:02:55.466576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.776 [2024-11-26 18:02:55.466631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:13.776 [2024-11-26 18:02:55.466672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.776 [2024-11-26 18:02:55.469388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.776 [2024-11-26 18:02:55.469499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:13.776 [2024-11-26 18:02:55.469702] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:18:13.776 [2024-11-26 18:02:55.469829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:13.776 [2024-11-26 18:02:55.470089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:13.776 spare 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.776 [2024-11-26 18:02:55.570071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:13.776 [2024-11-26 18:02:55.570238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:13.776 [2024-11-26 18:02:55.570671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:13.776 [2024-11-26 18:02:55.570985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:13.776 [2024-11-26 18:02:55.571060] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:13.776 [2024-11-26 18:02:55.571396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.776 
18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.776 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.049 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.049 "name": "raid_bdev1", 00:18:14.049 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:14.049 "strip_size_kb": 0, 00:18:14.049 "state": "online", 00:18:14.049 "raid_level": "raid1", 00:18:14.049 "superblock": true, 00:18:14.049 "num_base_bdevs": 2, 00:18:14.049 "num_base_bdevs_discovered": 2, 00:18:14.049 "num_base_bdevs_operational": 2, 00:18:14.049 "base_bdevs_list": [ 00:18:14.049 { 00:18:14.049 "name": "spare", 00:18:14.049 "uuid": "57611404-19f4-5291-b1f8-1ebc7a4e5117", 00:18:14.049 "is_configured": true, 00:18:14.049 "data_offset": 256, 00:18:14.049 
"data_size": 7936 00:18:14.049 }, 00:18:14.049 { 00:18:14.049 "name": "BaseBdev2", 00:18:14.049 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:14.049 "is_configured": true, 00:18:14.049 "data_offset": 256, 00:18:14.049 "data_size": 7936 00:18:14.049 } 00:18:14.049 ] 00:18:14.049 }' 00:18:14.049 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.049 18:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.309 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:14.309 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.309 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:14.309 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:14.309 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.309 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.309 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.309 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.309 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.309 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.309 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.309 "name": "raid_bdev1", 00:18:14.309 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:14.309 "strip_size_kb": 0, 00:18:14.309 "state": "online", 00:18:14.309 "raid_level": "raid1", 00:18:14.309 "superblock": true, 00:18:14.309 "num_base_bdevs": 2, 
00:18:14.309 "num_base_bdevs_discovered": 2, 00:18:14.309 "num_base_bdevs_operational": 2, 00:18:14.309 "base_bdevs_list": [ 00:18:14.309 { 00:18:14.309 "name": "spare", 00:18:14.309 "uuid": "57611404-19f4-5291-b1f8-1ebc7a4e5117", 00:18:14.309 "is_configured": true, 00:18:14.309 "data_offset": 256, 00:18:14.309 "data_size": 7936 00:18:14.309 }, 00:18:14.309 { 00:18:14.309 "name": "BaseBdev2", 00:18:14.309 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:14.309 "is_configured": true, 00:18:14.309 "data_offset": 256, 00:18:14.309 "data_size": 7936 00:18:14.309 } 00:18:14.309 ] 00:18:14.309 }' 00:18:14.309 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.309 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:14.309 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.568 18:02:56 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.568 [2024-11-26 18:02:56.274270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.568 
18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.568 "name": "raid_bdev1", 00:18:14.568 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:14.568 "strip_size_kb": 0, 00:18:14.568 "state": "online", 00:18:14.568 "raid_level": "raid1", 00:18:14.568 "superblock": true, 00:18:14.568 "num_base_bdevs": 2, 00:18:14.568 "num_base_bdevs_discovered": 1, 00:18:14.568 "num_base_bdevs_operational": 1, 00:18:14.568 "base_bdevs_list": [ 00:18:14.568 { 00:18:14.568 "name": null, 00:18:14.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.568 "is_configured": false, 00:18:14.568 "data_offset": 0, 00:18:14.568 "data_size": 7936 00:18:14.568 }, 00:18:14.568 { 00:18:14.568 "name": "BaseBdev2", 00:18:14.568 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:14.568 "is_configured": true, 00:18:14.568 "data_offset": 256, 00:18:14.568 "data_size": 7936 00:18:14.568 } 00:18:14.568 ] 00:18:14.568 }' 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.568 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.135 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:15.135 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.135 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.135 [2024-11-26 18:02:56.777573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:15.135 [2024-11-26 18:02:56.777904] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:15.135 [2024-11-26 18:02:56.777986] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
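The JSON blocks in this trace come from `rpc_cmd bdev_raid_get_bdevs all`, and the harness's process checks (`bdev/bdev_raid.sh@176`-`@177` above) read `.process.type` and `.process.target` from it with jq, defaulting to `none` when no rebuild is running. A minimal self-contained sketch of that extraction, using a trimmed copy of the `raid_bdev_info` JSON captured in this log (values are taken from the trace, not from a live target):

```shell
#!/usr/bin/env bash
# Trimmed copy of the raid_bdev_info JSON dumped in the trace above,
# at the point where the rebuild on raid_bdev1 has just started.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "process": { "type": "rebuild", "target": "spare" }
}'

# Same jq expressions as in the trace: the // "none" alternative
# operator supplies a default when the .process key is absent.
process_type=$(jq -r '.process.type // "none"' <<< "$raid_bdev_info")
process_target=$(jq -r '.process.target // "none"' <<< "$raid_bdev_info")

echo "$process_type $process_target"
```

When no rebuild is in flight the `.process` key is simply missing and both queries fall back to `none`, which is what the `[[ none == \n\o\n\e ]]` comparisons visible elsewhere in the trace are matching against.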
00:18:15.135 [2024-11-26 18:02:56.778139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:15.135 [2024-11-26 18:02:56.796982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:15.135 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.135 18:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:15.135 [2024-11-26 18:02:56.799358] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:16.071 18:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.071 18:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.071 18:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.071 18:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.071 18:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.071 18:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.071 18:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.071 18:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.071 18:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.071 18:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.071 18:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.071 "name": "raid_bdev1", 00:18:16.071 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:16.071 "strip_size_kb": 0, 00:18:16.071 "state": "online", 
00:18:16.071 "raid_level": "raid1", 00:18:16.071 "superblock": true, 00:18:16.071 "num_base_bdevs": 2, 00:18:16.071 "num_base_bdevs_discovered": 2, 00:18:16.071 "num_base_bdevs_operational": 2, 00:18:16.071 "process": { 00:18:16.071 "type": "rebuild", 00:18:16.071 "target": "spare", 00:18:16.071 "progress": { 00:18:16.071 "blocks": 2560, 00:18:16.071 "percent": 32 00:18:16.071 } 00:18:16.071 }, 00:18:16.071 "base_bdevs_list": [ 00:18:16.071 { 00:18:16.071 "name": "spare", 00:18:16.071 "uuid": "57611404-19f4-5291-b1f8-1ebc7a4e5117", 00:18:16.071 "is_configured": true, 00:18:16.071 "data_offset": 256, 00:18:16.071 "data_size": 7936 00:18:16.071 }, 00:18:16.071 { 00:18:16.071 "name": "BaseBdev2", 00:18:16.071 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:16.071 "is_configured": true, 00:18:16.071 "data_offset": 256, 00:18:16.071 "data_size": 7936 00:18:16.071 } 00:18:16.071 ] 00:18:16.071 }' 00:18:16.071 18:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.071 18:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.071 18:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.331 18:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.331 18:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:16.331 18:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.331 18:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.331 [2024-11-26 18:02:57.966735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.331 [2024-11-26 18:02:58.005843] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:16.331 [2024-11-26 
18:02:58.006092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.331 [2024-11-26 18:02:58.006116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.331 [2024-11-26 18:02:58.006130] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.331 "name": "raid_bdev1", 00:18:16.331 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:16.331 "strip_size_kb": 0, 00:18:16.331 "state": "online", 00:18:16.331 "raid_level": "raid1", 00:18:16.331 "superblock": true, 00:18:16.331 "num_base_bdevs": 2, 00:18:16.331 "num_base_bdevs_discovered": 1, 00:18:16.331 "num_base_bdevs_operational": 1, 00:18:16.331 "base_bdevs_list": [ 00:18:16.331 { 00:18:16.331 "name": null, 00:18:16.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.331 "is_configured": false, 00:18:16.331 "data_offset": 0, 00:18:16.331 "data_size": 7936 00:18:16.331 }, 00:18:16.331 { 00:18:16.331 "name": "BaseBdev2", 00:18:16.331 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:16.331 "is_configured": true, 00:18:16.331 "data_offset": 256, 00:18:16.331 "data_size": 7936 00:18:16.331 } 00:18:16.331 ] 00:18:16.331 }' 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.331 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.899 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:16.900 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.900 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.900 [2024-11-26 18:02:58.558869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:16.900 [2024-11-26 18:02:58.559034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.900 [2024-11-26 18:02:58.559086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:18:16.900 [2024-11-26 18:02:58.559145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.900 [2024-11-26 18:02:58.559724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.900 [2024-11-26 18:02:58.559806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:16.900 [2024-11-26 18:02:58.559967] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:16.900 [2024-11-26 18:02:58.560038] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:16.900 [2024-11-26 18:02:58.560091] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:16.900 [2024-11-26 18:02:58.560154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:16.900 [2024-11-26 18:02:58.579657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:16.900 spare 00:18:16.900 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.900 18:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:16.900 [2024-11-26 18:02:58.582041] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:17.840 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.840 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.840 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.840 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.840 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.840 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.840 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.840 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.840 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.840 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.840 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.840 "name": "raid_bdev1", 00:18:17.840 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:17.840 "strip_size_kb": 0, 00:18:17.840 "state": "online", 00:18:17.840 "raid_level": "raid1", 00:18:17.840 "superblock": true, 00:18:17.840 "num_base_bdevs": 2, 00:18:17.840 "num_base_bdevs_discovered": 2, 00:18:17.840 "num_base_bdevs_operational": 2, 00:18:17.840 "process": { 00:18:17.840 "type": "rebuild", 00:18:17.840 "target": "spare", 00:18:17.840 "progress": { 00:18:17.840 "blocks": 2560, 00:18:17.840 "percent": 32 00:18:17.840 } 00:18:17.840 }, 00:18:17.840 "base_bdevs_list": [ 00:18:17.840 { 00:18:17.840 "name": "spare", 00:18:17.840 "uuid": "57611404-19f4-5291-b1f8-1ebc7a4e5117", 00:18:17.840 "is_configured": true, 00:18:17.840 "data_offset": 256, 00:18:17.840 "data_size": 7936 00:18:17.840 }, 00:18:17.840 { 00:18:17.840 "name": "BaseBdev2", 00:18:17.840 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:17.840 "is_configured": true, 00:18:17.840 "data_offset": 256, 00:18:17.840 "data_size": 7936 00:18:17.840 } 00:18:17.840 ] 00:18:17.840 }' 00:18:17.840 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.840 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:18.099 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.099 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.100 [2024-11-26 18:02:59.757782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.100 [2024-11-26 18:02:59.788567] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:18.100 [2024-11-26 18:02:59.788765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.100 [2024-11-26 18:02:59.788823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.100 [2024-11-26 18:02:59.788859] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.100 "name": "raid_bdev1", 00:18:18.100 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:18.100 "strip_size_kb": 0, 00:18:18.100 "state": "online", 00:18:18.100 "raid_level": "raid1", 00:18:18.100 "superblock": true, 00:18:18.100 "num_base_bdevs": 2, 00:18:18.100 "num_base_bdevs_discovered": 1, 00:18:18.100 "num_base_bdevs_operational": 1, 00:18:18.100 "base_bdevs_list": [ 00:18:18.100 { 00:18:18.100 "name": null, 00:18:18.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.100 "is_configured": false, 00:18:18.100 "data_offset": 0, 00:18:18.100 "data_size": 7936 00:18:18.100 }, 00:18:18.100 { 00:18:18.100 "name": "BaseBdev2", 00:18:18.100 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:18.100 "is_configured": true, 00:18:18.100 "data_offset": 256, 00:18:18.100 "data_size": 7936 00:18:18.100 } 00:18:18.100 ] 00:18:18.100 }' 
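The degraded-array JSON just dumped (one null base bdev, `num_base_bdevs_discovered: 1`) is what `verify_raid_bdev_state raid_bdev1 online raid1 0 1` checks. A simplified reconstruction of that check, with field names taken from the xtrace (`bdev/bdev_raid.sh@103` onward); the real helper pulls the JSON from `rpc_cmd bdev_raid_get_bdevs` and also validates strip size and discovered count, which this sketch omits:

```shell
#!/usr/bin/env bash
# Simplified sketch of verify_raid_bdev_state as seen in the xtrace:
# compare selected fields of the raid bdev JSON against expectations.
verify_state() {
  local info=$1 expected_state=$2 raid_level=$3 operational=$4
  [[ $(jq -r .state <<< "$info") == "$expected_state" ]] &&
  [[ $(jq -r .raid_level <<< "$info") == "$raid_level" ]] &&
  [[ $(jq -r .num_base_bdevs_operational <<< "$info") -eq $operational ]]
}

# Values mirror the degraded raid_bdev1 JSON in the trace above.
info='{"state":"online","raid_level":"raid1",
       "num_base_bdevs_discovered":1,"num_base_bdevs_operational":1}'
verify_state "$info" online raid1 1 && echo "state check passed"
```

The degraded slot keeps the all-zero UUID and `"is_configured": false` placeholder rather than disappearing from `base_bdevs_list`, which is why the expected operational count drops to 1 while `num_base_bdevs` stays 2.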
00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.100 18:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.669 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:18.669 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.669 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:18.669 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:18.669 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.669 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.669 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.669 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.670 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.670 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.670 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.670 "name": "raid_bdev1", 00:18:18.670 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:18.670 "strip_size_kb": 0, 00:18:18.670 "state": "online", 00:18:18.670 "raid_level": "raid1", 00:18:18.670 "superblock": true, 00:18:18.670 "num_base_bdevs": 2, 00:18:18.670 "num_base_bdevs_discovered": 1, 00:18:18.670 "num_base_bdevs_operational": 1, 00:18:18.670 "base_bdevs_list": [ 00:18:18.670 { 00:18:18.670 "name": null, 00:18:18.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.670 "is_configured": false, 00:18:18.670 "data_offset": 0, 
00:18:18.670 "data_size": 7936 00:18:18.670 }, 00:18:18.670 { 00:18:18.670 "name": "BaseBdev2", 00:18:18.670 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:18.670 "is_configured": true, 00:18:18.670 "data_offset": 256, 00:18:18.670 "data_size": 7936 00:18:18.670 } 00:18:18.670 ] 00:18:18.670 }' 00:18:18.670 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.670 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:18.670 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.670 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:18.670 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:18.670 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.670 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.670 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.670 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:18.670 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.670 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.670 [2024-11-26 18:03:00.460361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:18.670 [2024-11-26 18:03:00.460441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.670 [2024-11-26 18:03:00.460478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:18.670 [2024-11-26 18:03:00.460503] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.670 [2024-11-26 18:03:00.461076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.670 [2024-11-26 18:03:00.461100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:18.670 [2024-11-26 18:03:00.461212] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:18.670 [2024-11-26 18:03:00.461236] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:18.670 [2024-11-26 18:03:00.461250] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:18.670 [2024-11-26 18:03:00.461262] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:18.670 BaseBdev1 00:18:18.670 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.670 18:03:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:20.051 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:20.051 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.051 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.051 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.051 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.051 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:20.051 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.051 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.051 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.051 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.051 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.051 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.051 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.051 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.051 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.051 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.051 "name": "raid_bdev1", 00:18:20.051 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:20.051 "strip_size_kb": 0, 00:18:20.051 "state": "online", 00:18:20.051 "raid_level": "raid1", 00:18:20.051 "superblock": true, 00:18:20.051 "num_base_bdevs": 2, 00:18:20.051 "num_base_bdevs_discovered": 1, 00:18:20.051 "num_base_bdevs_operational": 1, 00:18:20.051 "base_bdevs_list": [ 00:18:20.051 { 00:18:20.051 "name": null, 00:18:20.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.051 "is_configured": false, 00:18:20.051 "data_offset": 0, 00:18:20.051 "data_size": 7936 00:18:20.051 }, 00:18:20.051 { 00:18:20.051 "name": "BaseBdev2", 00:18:20.051 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:20.051 "is_configured": true, 00:18:20.051 "data_offset": 256, 00:18:20.051 "data_size": 7936 00:18:20.051 } 00:18:20.051 ] 00:18:20.051 }' 00:18:20.051 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.051 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
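The trace below this point runs `NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1` (`common/autotest_common.sh@652` onward): the RPC is expected to fail because BaseBdev1's superblock no longer matches the raid bdev. A minimal sketch of that exit-status inversion, with behavior inferred from the `local es=0` / `valid_exec_arg` lines in the xtrace; the real helper also validates that its argument is an executable command, which this sketch omits:

```shell
#!/usr/bin/env bash
# NOT succeeds (exit 0) only when the wrapped command fails, mirroring
# the expected-failure idiom used throughout the autotest trace.
NOT() {
  if "$@"; then
    return 1    # command unexpectedly succeeded: the test should fail
  fi
  return 0      # command failed, which is exactly what was expected
}

NOT false && echo "expected failure detected"
```

Inverting the status this way lets the harness keep `set -e` semantics intact: an RPC that should fail but succeeds makes the whole test script abort, instead of being silently ignored.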
00:18:20.309 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.309 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.309 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.309 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.309 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.309 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.309 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.309 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.309 18:03:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.309 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.309 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.309 "name": "raid_bdev1", 00:18:20.309 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:20.309 "strip_size_kb": 0, 00:18:20.309 "state": "online", 00:18:20.309 "raid_level": "raid1", 00:18:20.309 "superblock": true, 00:18:20.309 "num_base_bdevs": 2, 00:18:20.309 "num_base_bdevs_discovered": 1, 00:18:20.309 "num_base_bdevs_operational": 1, 00:18:20.309 "base_bdevs_list": [ 00:18:20.309 { 00:18:20.309 "name": null, 00:18:20.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.309 "is_configured": false, 00:18:20.309 "data_offset": 0, 00:18:20.309 "data_size": 7936 00:18:20.309 }, 00:18:20.309 { 00:18:20.309 "name": "BaseBdev2", 00:18:20.309 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:20.309 "is_configured": true, 
00:18:20.309 "data_offset": 256, 00:18:20.309 "data_size": 7936 00:18:20.309 } 00:18:20.309 ] 00:18:20.309 }' 00:18:20.309 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.309 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.309 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.309 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.309 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:20.309 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:18:20.309 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:20.309 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:20.309 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.309 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:20.309 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.309 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:20.309 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.309 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.309 [2024-11-26 18:03:02.137728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:20.309 [2024-11-26 18:03:02.137947] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:20.310 [2024-11-26 18:03:02.137969] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:20.310 request: 00:18:20.310 { 00:18:20.310 "base_bdev": "BaseBdev1", 00:18:20.310 "raid_bdev": "raid_bdev1", 00:18:20.310 "method": "bdev_raid_add_base_bdev", 00:18:20.310 "req_id": 1 00:18:20.310 } 00:18:20.310 Got JSON-RPC error response 00:18:20.310 response: 00:18:20.310 { 00:18:20.310 "code": -22, 00:18:20.310 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:20.310 } 00:18:20.310 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:20.310 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:18:20.310 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.310 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.310 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.310 18:03:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:21.686 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.686 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.686 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.686 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.686 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.686 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:21.686 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.686 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.686 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.686 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.686 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.686 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.686 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.686 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.686 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.686 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.686 "name": "raid_bdev1", 00:18:21.686 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:21.686 "strip_size_kb": 0, 00:18:21.686 "state": "online", 00:18:21.686 "raid_level": "raid1", 00:18:21.686 "superblock": true, 00:18:21.686 "num_base_bdevs": 2, 00:18:21.686 "num_base_bdevs_discovered": 1, 00:18:21.686 "num_base_bdevs_operational": 1, 00:18:21.686 "base_bdevs_list": [ 00:18:21.686 { 00:18:21.686 "name": null, 00:18:21.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.686 "is_configured": false, 00:18:21.686 "data_offset": 0, 00:18:21.686 "data_size": 7936 00:18:21.686 }, 00:18:21.686 { 00:18:21.686 "name": "BaseBdev2", 00:18:21.686 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:21.686 "is_configured": true, 00:18:21.686 "data_offset": 256, 00:18:21.686 "data_size": 7936 00:18:21.686 } 00:18:21.686 ] 00:18:21.686 }' 
00:18:21.686 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.686 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.946 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:21.946 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.946 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:21.946 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:21.946 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.946 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.946 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.946 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.946 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.946 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.946 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.946 "name": "raid_bdev1", 00:18:21.946 "uuid": "00ee5c94-13e1-40f3-8df2-842efc1140c9", 00:18:21.946 "strip_size_kb": 0, 00:18:21.946 "state": "online", 00:18:21.946 "raid_level": "raid1", 00:18:21.946 "superblock": true, 00:18:21.946 "num_base_bdevs": 2, 00:18:21.946 "num_base_bdevs_discovered": 1, 00:18:21.946 "num_base_bdevs_operational": 1, 00:18:21.946 "base_bdevs_list": [ 00:18:21.946 { 00:18:21.946 "name": null, 00:18:21.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.946 "is_configured": false, 00:18:21.946 "data_offset": 0, 
00:18:21.946 "data_size": 7936 00:18:21.946 }, 00:18:21.946 { 00:18:21.946 "name": "BaseBdev2", 00:18:21.946 "uuid": "7d565cf0-3f47-5695-a93e-37d55c37fbe1", 00:18:21.946 "is_configured": true, 00:18:21.946 "data_offset": 256, 00:18:21.946 "data_size": 7936 00:18:21.947 } 00:18:21.947 ] 00:18:21.947 }' 00:18:21.947 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.947 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:21.947 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.947 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:21.947 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86941 00:18:21.947 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86941 ']' 00:18:21.947 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86941 00:18:21.947 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:21.947 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.947 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86941 00:18:21.947 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:21.947 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:21.947 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86941' 00:18:21.947 killing process with pid 86941 00:18:21.947 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86941 00:18:21.947 Received shutdown signal, test time was about 
60.000000 seconds 00:18:21.947 00:18:21.947 Latency(us) 00:18:21.947 [2024-11-26T18:03:03.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.947 [2024-11-26T18:03:03.810Z] =================================================================================================================== 00:18:21.947 [2024-11-26T18:03:03.810Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:21.947 18:03:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86941 00:18:21.947 [2024-11-26 18:03:03.746627] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:21.947 [2024-11-26 18:03:03.746820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.947 [2024-11-26 18:03:03.746909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.947 [2024-11-26 18:03:03.746966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:22.514 [2024-11-26 18:03:04.091412] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:23.892 ************************************ 00:18:23.892 END TEST raid_rebuild_test_sb_4k 00:18:23.892 ************************************ 00:18:23.892 18:03:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:23.892 00:18:23.892 real 0m21.036s 00:18:23.892 user 0m27.572s 00:18:23.892 sys 0m2.868s 00:18:23.892 18:03:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:23.892 18:03:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.892 18:03:05 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:23.892 18:03:05 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:23.892 18:03:05 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:23.892 18:03:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:23.892 18:03:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:23.892 ************************************ 00:18:23.892 START TEST raid_state_function_test_sb_md_separate 00:18:23.892 ************************************ 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:23.892 18:03:05 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:23.892 Process raid pid: 87644 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87644 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87644' 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87644 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87644 ']' 00:18:23.892 18:03:05 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.892 18:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.892 [2024-11-26 18:03:05.556279] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:18:23.892 [2024-11-26 18:03:05.556667] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.152 [2024-11-26 18:03:05.755074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.152 [2024-11-26 18:03:05.924068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.411 [2024-11-26 18:03:06.159147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:24.411 [2024-11-26 18:03:06.159281] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.672 [2024-11-26 18:03:06.455057] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:24.672 [2024-11-26 18:03:06.455220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:24.672 [2024-11-26 18:03:06.455262] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:24.672 [2024-11-26 18:03:06.455293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.672 "name": "Existed_Raid", 00:18:24.672 "uuid": "3ee0c8bb-9834-4ec2-a2dc-1e8da7eb289a", 00:18:24.672 "strip_size_kb": 0, 00:18:24.672 "state": "configuring", 00:18:24.672 "raid_level": "raid1", 00:18:24.672 "superblock": true, 00:18:24.672 "num_base_bdevs": 2, 00:18:24.672 "num_base_bdevs_discovered": 0, 00:18:24.672 "num_base_bdevs_operational": 2, 00:18:24.672 "base_bdevs_list": [ 00:18:24.672 { 00:18:24.672 "name": "BaseBdev1", 00:18:24.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.672 "is_configured": false, 00:18:24.672 "data_offset": 0, 00:18:24.672 "data_size": 0 00:18:24.672 }, 00:18:24.672 { 00:18:24.672 "name": "BaseBdev2", 00:18:24.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.672 "is_configured": false, 00:18:24.672 "data_offset": 0, 00:18:24.672 "data_size": 0 00:18:24.672 } 00:18:24.672 ] 00:18:24.672 }' 00:18:24.672 18:03:06 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.672 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.241 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:25.241 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.241 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.241 [2024-11-26 18:03:06.962131] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:25.241 [2024-11-26 18:03:06.962179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:25.241 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.241 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:25.241 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.241 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.241 [2024-11-26 18:03:06.974119] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:25.241 [2024-11-26 18:03:06.974229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:25.241 [2024-11-26 18:03:06.974245] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:25.241 [2024-11-26 18:03:06.974260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:25.241 18:03:06 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.241 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:25.241 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.241 18:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.241 [2024-11-26 18:03:07.027736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:25.241 BaseBdev1 00:18:25.241 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.241 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:25.241 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:25.241 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:25.241 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:25.241 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.242 [ 00:18:25.242 { 00:18:25.242 "name": "BaseBdev1", 00:18:25.242 "aliases": [ 00:18:25.242 "0ec69cd5-90f5-49ba-a3cb-de33fca8b815" 00:18:25.242 ], 00:18:25.242 "product_name": "Malloc disk", 00:18:25.242 "block_size": 4096, 00:18:25.242 "num_blocks": 8192, 00:18:25.242 "uuid": "0ec69cd5-90f5-49ba-a3cb-de33fca8b815", 00:18:25.242 "md_size": 32, 00:18:25.242 "md_interleave": false, 00:18:25.242 "dif_type": 0, 00:18:25.242 "assigned_rate_limits": { 00:18:25.242 "rw_ios_per_sec": 0, 00:18:25.242 "rw_mbytes_per_sec": 0, 00:18:25.242 "r_mbytes_per_sec": 0, 00:18:25.242 "w_mbytes_per_sec": 0 00:18:25.242 }, 00:18:25.242 "claimed": true, 00:18:25.242 "claim_type": "exclusive_write", 00:18:25.242 "zoned": false, 00:18:25.242 "supported_io_types": { 00:18:25.242 "read": true, 00:18:25.242 "write": true, 00:18:25.242 "unmap": true, 00:18:25.242 "flush": true, 00:18:25.242 "reset": true, 00:18:25.242 "nvme_admin": false, 00:18:25.242 "nvme_io": false, 00:18:25.242 "nvme_io_md": false, 00:18:25.242 "write_zeroes": true, 00:18:25.242 "zcopy": true, 00:18:25.242 "get_zone_info": false, 00:18:25.242 "zone_management": false, 00:18:25.242 "zone_append": false, 00:18:25.242 "compare": false, 00:18:25.242 "compare_and_write": false, 00:18:25.242 "abort": true, 00:18:25.242 "seek_hole": false, 00:18:25.242 "seek_data": false, 00:18:25.242 "copy": true, 00:18:25.242 "nvme_iov_md": false 00:18:25.242 }, 00:18:25.242 "memory_domains": [ 00:18:25.242 { 00:18:25.242 "dma_device_id": "system", 00:18:25.242 "dma_device_type": 1 00:18:25.242 }, 
00:18:25.242 { 00:18:25.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.242 "dma_device_type": 2 00:18:25.242 } 00:18:25.242 ], 00:18:25.242 "driver_specific": {} 00:18:25.242 } 00:18:25.242 ] 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.242 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.505 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.505 "name": "Existed_Raid", 00:18:25.505 "uuid": "8b1c4dd3-065f-498f-b80e-ec9fc1f0b3e7", 00:18:25.505 "strip_size_kb": 0, 00:18:25.505 "state": "configuring", 00:18:25.505 "raid_level": "raid1", 00:18:25.505 "superblock": true, 00:18:25.505 "num_base_bdevs": 2, 00:18:25.505 "num_base_bdevs_discovered": 1, 00:18:25.505 "num_base_bdevs_operational": 2, 00:18:25.505 "base_bdevs_list": [ 00:18:25.505 { 00:18:25.505 "name": "BaseBdev1", 00:18:25.505 "uuid": "0ec69cd5-90f5-49ba-a3cb-de33fca8b815", 00:18:25.505 "is_configured": true, 00:18:25.505 "data_offset": 256, 00:18:25.505 "data_size": 7936 00:18:25.505 }, 00:18:25.505 { 00:18:25.505 "name": "BaseBdev2", 00:18:25.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.505 "is_configured": false, 00:18:25.505 "data_offset": 0, 00:18:25.505 "data_size": 0 00:18:25.505 } 00:18:25.505 ] 00:18:25.505 }' 00:18:25.505 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.505 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.771 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:25.771 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.771 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:18:25.771 [2024-11-26 18:03:07.519009] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:25.771 [2024-11-26 18:03:07.519173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:25.771 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.771 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:25.771 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.771 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.771 [2024-11-26 18:03:07.531020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:25.771 [2024-11-26 18:03:07.533169] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:25.771 [2024-11-26 18:03:07.533276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:25.771 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.771 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:25.771 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:25.771 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:25.771 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.771 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:18:25.772 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.772 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.772 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.772 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.772 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.772 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.772 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.772 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.772 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.772 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.772 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.772 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.772 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.772 "name": "Existed_Raid", 00:18:25.772 "uuid": "4b171071-4c57-4f10-b96a-d28d2543119c", 00:18:25.772 "strip_size_kb": 0, 00:18:25.772 "state": "configuring", 00:18:25.772 "raid_level": "raid1", 00:18:25.772 "superblock": true, 00:18:25.772 "num_base_bdevs": 2, 00:18:25.772 "num_base_bdevs_discovered": 1, 00:18:25.772 
"num_base_bdevs_operational": 2, 00:18:25.772 "base_bdevs_list": [ 00:18:25.772 { 00:18:25.772 "name": "BaseBdev1", 00:18:25.772 "uuid": "0ec69cd5-90f5-49ba-a3cb-de33fca8b815", 00:18:25.772 "is_configured": true, 00:18:25.772 "data_offset": 256, 00:18:25.772 "data_size": 7936 00:18:25.772 }, 00:18:25.772 { 00:18:25.772 "name": "BaseBdev2", 00:18:25.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.772 "is_configured": false, 00:18:25.772 "data_offset": 0, 00:18:25.772 "data_size": 0 00:18:25.772 } 00:18:25.772 ] 00:18:25.772 }' 00:18:25.772 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.772 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.341 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:26.341 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.341 18:03:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.341 [2024-11-26 18:03:08.027466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:26.341 [2024-11-26 18:03:08.027866] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:26.341 [2024-11-26 18:03:08.027931] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:26.341 [2024-11-26 18:03:08.028056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:26.341 [2024-11-26 18:03:08.028242] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:26.341 [2024-11-26 18:03:08.028292] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:26.341 BaseBdev2 
00:18:26.341 [2024-11-26 18:03:08.028451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.341 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.341 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:26.341 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.342 [ 00:18:26.342 { 00:18:26.342 "name": "BaseBdev2", 00:18:26.342 "aliases": [ 00:18:26.342 
"8309249e-e542-493c-99fc-51b6e9d412de" 00:18:26.342 ], 00:18:26.342 "product_name": "Malloc disk", 00:18:26.342 "block_size": 4096, 00:18:26.342 "num_blocks": 8192, 00:18:26.342 "uuid": "8309249e-e542-493c-99fc-51b6e9d412de", 00:18:26.342 "md_size": 32, 00:18:26.342 "md_interleave": false, 00:18:26.342 "dif_type": 0, 00:18:26.342 "assigned_rate_limits": { 00:18:26.342 "rw_ios_per_sec": 0, 00:18:26.342 "rw_mbytes_per_sec": 0, 00:18:26.342 "r_mbytes_per_sec": 0, 00:18:26.342 "w_mbytes_per_sec": 0 00:18:26.342 }, 00:18:26.342 "claimed": true, 00:18:26.342 "claim_type": "exclusive_write", 00:18:26.342 "zoned": false, 00:18:26.342 "supported_io_types": { 00:18:26.342 "read": true, 00:18:26.342 "write": true, 00:18:26.342 "unmap": true, 00:18:26.342 "flush": true, 00:18:26.342 "reset": true, 00:18:26.342 "nvme_admin": false, 00:18:26.342 "nvme_io": false, 00:18:26.342 "nvme_io_md": false, 00:18:26.342 "write_zeroes": true, 00:18:26.342 "zcopy": true, 00:18:26.342 "get_zone_info": false, 00:18:26.342 "zone_management": false, 00:18:26.342 "zone_append": false, 00:18:26.342 "compare": false, 00:18:26.342 "compare_and_write": false, 00:18:26.342 "abort": true, 00:18:26.342 "seek_hole": false, 00:18:26.342 "seek_data": false, 00:18:26.342 "copy": true, 00:18:26.342 "nvme_iov_md": false 00:18:26.342 }, 00:18:26.342 "memory_domains": [ 00:18:26.342 { 00:18:26.342 "dma_device_id": "system", 00:18:26.342 "dma_device_type": 1 00:18:26.342 }, 00:18:26.342 { 00:18:26.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.342 "dma_device_type": 2 00:18:26.342 } 00:18:26.342 ], 00:18:26.342 "driver_specific": {} 00:18:26.342 } 00:18:26.342 ] 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.342 18:03:08 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.342 "name": "Existed_Raid", 00:18:26.342 "uuid": "4b171071-4c57-4f10-b96a-d28d2543119c", 00:18:26.342 "strip_size_kb": 0, 00:18:26.342 "state": "online", 00:18:26.342 "raid_level": "raid1", 00:18:26.342 "superblock": true, 00:18:26.342 "num_base_bdevs": 2, 00:18:26.342 "num_base_bdevs_discovered": 2, 00:18:26.342 "num_base_bdevs_operational": 2, 00:18:26.342 "base_bdevs_list": [ 00:18:26.342 { 00:18:26.342 "name": "BaseBdev1", 00:18:26.342 "uuid": "0ec69cd5-90f5-49ba-a3cb-de33fca8b815", 00:18:26.342 "is_configured": true, 00:18:26.342 "data_offset": 256, 00:18:26.342 "data_size": 7936 00:18:26.342 }, 00:18:26.342 { 00:18:26.342 "name": "BaseBdev2", 00:18:26.342 "uuid": "8309249e-e542-493c-99fc-51b6e9d412de", 00:18:26.342 "is_configured": true, 00:18:26.342 "data_offset": 256, 00:18:26.342 "data_size": 7936 00:18:26.342 } 00:18:26.342 ] 00:18:26.342 }' 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.342 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.912 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:26.912 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:26.912 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:26.912 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:26.912 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:26.912 18:03:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:26.912 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:26.912 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:26.912 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.912 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.912 [2024-11-26 18:03:08.527070] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.912 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.912 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:26.912 "name": "Existed_Raid", 00:18:26.912 "aliases": [ 00:18:26.912 "4b171071-4c57-4f10-b96a-d28d2543119c" 00:18:26.912 ], 00:18:26.912 "product_name": "Raid Volume", 00:18:26.912 "block_size": 4096, 00:18:26.912 "num_blocks": 7936, 00:18:26.912 "uuid": "4b171071-4c57-4f10-b96a-d28d2543119c", 00:18:26.912 "md_size": 32, 00:18:26.912 "md_interleave": false, 00:18:26.912 "dif_type": 0, 00:18:26.912 "assigned_rate_limits": { 00:18:26.912 "rw_ios_per_sec": 0, 00:18:26.912 "rw_mbytes_per_sec": 0, 00:18:26.912 "r_mbytes_per_sec": 0, 00:18:26.912 "w_mbytes_per_sec": 0 00:18:26.912 }, 00:18:26.912 "claimed": false, 00:18:26.912 "zoned": false, 00:18:26.912 "supported_io_types": { 00:18:26.912 "read": true, 00:18:26.912 "write": true, 00:18:26.912 "unmap": false, 00:18:26.912 "flush": false, 00:18:26.912 "reset": true, 00:18:26.912 "nvme_admin": false, 00:18:26.912 "nvme_io": false, 00:18:26.912 "nvme_io_md": false, 00:18:26.912 "write_zeroes": true, 00:18:26.912 "zcopy": false, 00:18:26.912 "get_zone_info": 
false, 00:18:26.912 "zone_management": false, 00:18:26.912 "zone_append": false, 00:18:26.912 "compare": false, 00:18:26.913 "compare_and_write": false, 00:18:26.913 "abort": false, 00:18:26.913 "seek_hole": false, 00:18:26.913 "seek_data": false, 00:18:26.913 "copy": false, 00:18:26.913 "nvme_iov_md": false 00:18:26.913 }, 00:18:26.913 "memory_domains": [ 00:18:26.913 { 00:18:26.913 "dma_device_id": "system", 00:18:26.913 "dma_device_type": 1 00:18:26.913 }, 00:18:26.913 { 00:18:26.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.913 "dma_device_type": 2 00:18:26.913 }, 00:18:26.913 { 00:18:26.913 "dma_device_id": "system", 00:18:26.913 "dma_device_type": 1 00:18:26.913 }, 00:18:26.913 { 00:18:26.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.913 "dma_device_type": 2 00:18:26.913 } 00:18:26.913 ], 00:18:26.913 "driver_specific": { 00:18:26.913 "raid": { 00:18:26.913 "uuid": "4b171071-4c57-4f10-b96a-d28d2543119c", 00:18:26.913 "strip_size_kb": 0, 00:18:26.913 "state": "online", 00:18:26.913 "raid_level": "raid1", 00:18:26.913 "superblock": true, 00:18:26.913 "num_base_bdevs": 2, 00:18:26.913 "num_base_bdevs_discovered": 2, 00:18:26.913 "num_base_bdevs_operational": 2, 00:18:26.913 "base_bdevs_list": [ 00:18:26.913 { 00:18:26.913 "name": "BaseBdev1", 00:18:26.913 "uuid": "0ec69cd5-90f5-49ba-a3cb-de33fca8b815", 00:18:26.913 "is_configured": true, 00:18:26.913 "data_offset": 256, 00:18:26.913 "data_size": 7936 00:18:26.913 }, 00:18:26.913 { 00:18:26.913 "name": "BaseBdev2", 00:18:26.913 "uuid": "8309249e-e542-493c-99fc-51b6e9d412de", 00:18:26.913 "is_configured": true, 00:18:26.913 "data_offset": 256, 00:18:26.913 "data_size": 7936 00:18:26.913 } 00:18:26.913 ] 00:18:26.913 } 00:18:26.913 } 00:18:26.913 }' 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:26.913 18:03:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:26.913 BaseBdev2' 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.913 18:03:08 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.913 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.172 [2024-11-26 18:03:08.774346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.172 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.172 "name": "Existed_Raid", 
00:18:27.172 "uuid": "4b171071-4c57-4f10-b96a-d28d2543119c", 00:18:27.172 "strip_size_kb": 0, 00:18:27.173 "state": "online", 00:18:27.173 "raid_level": "raid1", 00:18:27.173 "superblock": true, 00:18:27.173 "num_base_bdevs": 2, 00:18:27.173 "num_base_bdevs_discovered": 1, 00:18:27.173 "num_base_bdevs_operational": 1, 00:18:27.173 "base_bdevs_list": [ 00:18:27.173 { 00:18:27.173 "name": null, 00:18:27.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.173 "is_configured": false, 00:18:27.173 "data_offset": 0, 00:18:27.173 "data_size": 7936 00:18:27.173 }, 00:18:27.173 { 00:18:27.173 "name": "BaseBdev2", 00:18:27.173 "uuid": "8309249e-e542-493c-99fc-51b6e9d412de", 00:18:27.173 "is_configured": true, 00:18:27.173 "data_offset": 256, 00:18:27.173 "data_size": 7936 00:18:27.173 } 00:18:27.173 ] 00:18:27.173 }' 00:18:27.173 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.173 18:03:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.742 [2024-11-26 18:03:09.354557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:27.742 [2024-11-26 18:03:09.354771] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:27.742 [2024-11-26 18:03:09.471637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.742 [2024-11-26 18:03:09.471796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.742 [2024-11-26 18:03:09.471849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87644 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87644 ']' 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87644 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87644 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87644' 00:18:27.742 killing process with pid 87644 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87644 00:18:27.742 [2024-11-26 18:03:09.568252] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:27.742 18:03:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87644 00:18:27.742 [2024-11-26 18:03:09.587162] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:29.124 18:03:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:29.124 00:18:29.124 real 0m5.410s 00:18:29.124 user 0m7.724s 00:18:29.124 sys 0m0.917s 00:18:29.124 18:03:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:29.124 ************************************ 00:18:29.124 END TEST raid_state_function_test_sb_md_separate 00:18:29.124 ************************************ 00:18:29.124 18:03:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.124 18:03:10 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:29.124 18:03:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:29.124 18:03:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:29.124 18:03:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.124 ************************************ 00:18:29.124 START TEST raid_superblock_test_md_separate 00:18:29.124 ************************************ 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87891 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87891 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87891 ']' 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.124 18:03:10 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.124 18:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.384 [2024-11-26 18:03:11.000287] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:18:29.384 [2024-11-26 18:03:11.000492] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87891 ] 00:18:29.384 [2024-11-26 18:03:11.177547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.643 [2024-11-26 18:03:11.304827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.902 [2024-11-26 18:03:11.516514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:29.902 [2024-11-26 18:03:11.516687] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:30.163 18:03:11 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.163 malloc1 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.163 [2024-11-26 18:03:11.947395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:30.163 [2024-11-26 18:03:11.947465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.163 [2024-11-26 18:03:11.947489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000007280 00:18:30.163 [2024-11-26 18:03:11.947500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.163 [2024-11-26 18:03:11.949677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.163 [2024-11-26 18:03:11.949722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:30.163 pt1 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.163 malloc2 00:18:30.163 18:03:11 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.163 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:30.163 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.163 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.163 [2024-11-26 18:03:12.007577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:30.163 [2024-11-26 18:03:12.007688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.163 [2024-11-26 18:03:12.007748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:30.163 [2024-11-26 18:03:12.007792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.163 [2024-11-26 18:03:12.009950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.163 [2024-11-26 18:03:12.010026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:30.163 pt2 00:18:30.163 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.163 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:30.163 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:30.163 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:30.163 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.163 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.163 [2024-11-26 18:03:12.019581] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:30.163 [2024-11-26 18:03:12.021600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:30.163 [2024-11-26 18:03:12.021845] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:30.163 [2024-11-26 18:03:12.021900] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:30.163 [2024-11-26 18:03:12.022005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:30.163 [2024-11-26 18:03:12.022192] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:30.163 [2024-11-26 18:03:12.022238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:30.163 [2024-11-26 18:03:12.022406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.163 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.423 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:30.423 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.423 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.423 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.423 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.423 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.423 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.423 18:03:12 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.423 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.423 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.423 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.423 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.423 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.423 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.423 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.423 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.423 "name": "raid_bdev1", 00:18:30.423 "uuid": "8d841cc6-a1f7-4430-8f9c-a2196c039e11", 00:18:30.423 "strip_size_kb": 0, 00:18:30.423 "state": "online", 00:18:30.423 "raid_level": "raid1", 00:18:30.423 "superblock": true, 00:18:30.423 "num_base_bdevs": 2, 00:18:30.423 "num_base_bdevs_discovered": 2, 00:18:30.423 "num_base_bdevs_operational": 2, 00:18:30.423 "base_bdevs_list": [ 00:18:30.423 { 00:18:30.423 "name": "pt1", 00:18:30.423 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:30.423 "is_configured": true, 00:18:30.423 "data_offset": 256, 00:18:30.423 "data_size": 7936 00:18:30.423 }, 00:18:30.423 { 00:18:30.423 "name": "pt2", 00:18:30.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.423 "is_configured": true, 00:18:30.423 "data_offset": 256, 00:18:30.423 "data_size": 7936 00:18:30.423 } 00:18:30.423 ] 00:18:30.423 }' 00:18:30.423 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:30.423 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.754 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:30.754 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:30.754 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:30.754 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:30.754 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:30.754 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:30.754 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:30.754 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:30.754 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.754 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.754 [2024-11-26 18:03:12.503141] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:30.754 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.754 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:30.754 "name": "raid_bdev1", 00:18:30.754 "aliases": [ 00:18:30.754 "8d841cc6-a1f7-4430-8f9c-a2196c039e11" 00:18:30.754 ], 00:18:30.754 "product_name": "Raid Volume", 00:18:30.754 "block_size": 4096, 00:18:30.754 "num_blocks": 7936, 00:18:30.754 "uuid": "8d841cc6-a1f7-4430-8f9c-a2196c039e11", 00:18:30.754 "md_size": 32, 
00:18:30.754 "md_interleave": false, 00:18:30.754 "dif_type": 0, 00:18:30.754 "assigned_rate_limits": { 00:18:30.754 "rw_ios_per_sec": 0, 00:18:30.754 "rw_mbytes_per_sec": 0, 00:18:30.754 "r_mbytes_per_sec": 0, 00:18:30.754 "w_mbytes_per_sec": 0 00:18:30.754 }, 00:18:30.754 "claimed": false, 00:18:30.754 "zoned": false, 00:18:30.754 "supported_io_types": { 00:18:30.754 "read": true, 00:18:30.754 "write": true, 00:18:30.754 "unmap": false, 00:18:30.754 "flush": false, 00:18:30.754 "reset": true, 00:18:30.754 "nvme_admin": false, 00:18:30.754 "nvme_io": false, 00:18:30.754 "nvme_io_md": false, 00:18:30.754 "write_zeroes": true, 00:18:30.754 "zcopy": false, 00:18:30.754 "get_zone_info": false, 00:18:30.754 "zone_management": false, 00:18:30.754 "zone_append": false, 00:18:30.754 "compare": false, 00:18:30.754 "compare_and_write": false, 00:18:30.754 "abort": false, 00:18:30.754 "seek_hole": false, 00:18:30.754 "seek_data": false, 00:18:30.754 "copy": false, 00:18:30.754 "nvme_iov_md": false 00:18:30.754 }, 00:18:30.754 "memory_domains": [ 00:18:30.754 { 00:18:30.754 "dma_device_id": "system", 00:18:30.754 "dma_device_type": 1 00:18:30.754 }, 00:18:30.754 { 00:18:30.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.754 "dma_device_type": 2 00:18:30.754 }, 00:18:30.754 { 00:18:30.754 "dma_device_id": "system", 00:18:30.754 "dma_device_type": 1 00:18:30.754 }, 00:18:30.754 { 00:18:30.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.754 "dma_device_type": 2 00:18:30.754 } 00:18:30.754 ], 00:18:30.754 "driver_specific": { 00:18:30.754 "raid": { 00:18:30.754 "uuid": "8d841cc6-a1f7-4430-8f9c-a2196c039e11", 00:18:30.754 "strip_size_kb": 0, 00:18:30.754 "state": "online", 00:18:30.754 "raid_level": "raid1", 00:18:30.754 "superblock": true, 00:18:30.754 "num_base_bdevs": 2, 00:18:30.754 "num_base_bdevs_discovered": 2, 00:18:30.754 "num_base_bdevs_operational": 2, 00:18:30.754 "base_bdevs_list": [ 00:18:30.754 { 00:18:30.754 "name": "pt1", 00:18:30.754 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:30.754 "is_configured": true, 00:18:30.754 "data_offset": 256, 00:18:30.754 "data_size": 7936 00:18:30.754 }, 00:18:30.754 { 00:18:30.754 "name": "pt2", 00:18:30.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.754 "is_configured": true, 00:18:30.754 "data_offset": 256, 00:18:30.754 "data_size": 7936 00:18:30.754 } 00:18:30.754 ] 00:18:30.754 } 00:18:30.754 } 00:18:30.754 }' 00:18:30.754 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:31.030 pt2' 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:31.030 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.031 [2024-11-26 18:03:12.758742] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8d841cc6-a1f7-4430-8f9c-a2196c039e11 00:18:31.031 
18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 8d841cc6-a1f7-4430-8f9c-a2196c039e11 ']' 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.031 [2024-11-26 18:03:12.802298] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.031 [2024-11-26 18:03:12.802331] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:31.031 [2024-11-26 18:03:12.802439] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.031 [2024-11-26 18:03:12.802508] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.031 [2024-11-26 18:03:12.802523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:31.031 18:03:12 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:31.031 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.291 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.291 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false 
== true ']' 00:18:31.291 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:31.291 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:31.291 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:31.291 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.292 [2024-11-26 18:03:12.930164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:31.292 [2024-11-26 18:03:12.932351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:31.292 [2024-11-26 18:03:12.932487] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:31.292 [2024-11-26 18:03:12.932609] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:18:31.292 [2024-11-26 18:03:12.932705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.292 [2024-11-26 18:03:12.932741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:31.292 request: 00:18:31.292 { 00:18:31.292 "name": "raid_bdev1", 00:18:31.292 "raid_level": "raid1", 00:18:31.292 "base_bdevs": [ 00:18:31.292 "malloc1", 00:18:31.292 "malloc2" 00:18:31.292 ], 00:18:31.292 "superblock": false, 00:18:31.292 "method": "bdev_raid_create", 00:18:31.292 "req_id": 1 00:18:31.292 } 00:18:31.292 Got JSON-RPC error response 00:18:31.292 response: 00:18:31.292 { 00:18:31.292 "code": -17, 00:18:31.292 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:31.292 } 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.292 18:03:12 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.292 [2024-11-26 18:03:12.994031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:31.292 [2024-11-26 18:03:12.994162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.292 [2024-11-26 18:03:12.994201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:31.292 [2024-11-26 18:03:12.994238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.292 [2024-11-26 18:03:12.996477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.292 [2024-11-26 18:03:12.996560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:31.292 [2024-11-26 18:03:12.996651] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:31.292 [2024-11-26 18:03:12.996751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:31.292 pt1 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.292 
18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.292 18:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.292 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.292 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.292 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.292 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.292 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.292 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.292 18:03:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.292 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.292 18:03:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.292 18:03:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.292 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.292 "name": "raid_bdev1", 00:18:31.292 "uuid": "8d841cc6-a1f7-4430-8f9c-a2196c039e11", 00:18:31.292 "strip_size_kb": 0, 00:18:31.292 "state": "configuring", 00:18:31.292 "raid_level": "raid1", 00:18:31.292 "superblock": true, 00:18:31.292 "num_base_bdevs": 2, 00:18:31.292 "num_base_bdevs_discovered": 1, 00:18:31.292 
"num_base_bdevs_operational": 2, 00:18:31.292 "base_bdevs_list": [ 00:18:31.292 { 00:18:31.292 "name": "pt1", 00:18:31.292 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:31.292 "is_configured": true, 00:18:31.292 "data_offset": 256, 00:18:31.292 "data_size": 7936 00:18:31.292 }, 00:18:31.292 { 00:18:31.292 "name": null, 00:18:31.292 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.292 "is_configured": false, 00:18:31.292 "data_offset": 256, 00:18:31.292 "data_size": 7936 00:18:31.292 } 00:18:31.292 ] 00:18:31.292 }' 00:18:31.292 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.292 18:03:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.860 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:31.860 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:31.860 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:31.860 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:31.860 18:03:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.860 18:03:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.860 [2024-11-26 18:03:13.497179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:31.860 [2024-11-26 18:03:13.497365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.860 [2024-11-26 18:03:13.497398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:31.860 [2024-11-26 18:03:13.497412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.860 
[2024-11-26 18:03:13.497714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.860 [2024-11-26 18:03:13.497737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:31.860 [2024-11-26 18:03:13.497803] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:31.860 [2024-11-26 18:03:13.497831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:31.860 [2024-11-26 18:03:13.497968] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:31.860 [2024-11-26 18:03:13.497980] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:31.861 [2024-11-26 18:03:13.498085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:31.861 [2024-11-26 18:03:13.498220] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:31.861 [2024-11-26 18:03:13.498235] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:31.861 [2024-11-26 18:03:13.498369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.861 pt2 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.861 "name": "raid_bdev1", 00:18:31.861 "uuid": "8d841cc6-a1f7-4430-8f9c-a2196c039e11", 00:18:31.861 "strip_size_kb": 0, 00:18:31.861 "state": "online", 00:18:31.861 "raid_level": "raid1", 00:18:31.861 "superblock": true, 00:18:31.861 "num_base_bdevs": 2, 00:18:31.861 "num_base_bdevs_discovered": 2, 00:18:31.861 "num_base_bdevs_operational": 2, 00:18:31.861 "base_bdevs_list": [ 00:18:31.861 { 00:18:31.861 "name": 
"pt1", 00:18:31.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:31.861 "is_configured": true, 00:18:31.861 "data_offset": 256, 00:18:31.861 "data_size": 7936 00:18:31.861 }, 00:18:31.861 { 00:18:31.861 "name": "pt2", 00:18:31.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.861 "is_configured": true, 00:18:31.861 "data_offset": 256, 00:18:31.861 "data_size": 7936 00:18:31.861 } 00:18:31.861 ] 00:18:31.861 }' 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.861 18:03:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.120 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:32.120 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:32.120 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:32.120 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:32.120 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:32.120 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:32.120 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:32.120 18:03:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.120 18:03:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.120 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:32.120 [2024-11-26 18:03:13.940772] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.120 18:03:13 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.120 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:32.120 "name": "raid_bdev1", 00:18:32.120 "aliases": [ 00:18:32.120 "8d841cc6-a1f7-4430-8f9c-a2196c039e11" 00:18:32.120 ], 00:18:32.120 "product_name": "Raid Volume", 00:18:32.120 "block_size": 4096, 00:18:32.120 "num_blocks": 7936, 00:18:32.120 "uuid": "8d841cc6-a1f7-4430-8f9c-a2196c039e11", 00:18:32.120 "md_size": 32, 00:18:32.120 "md_interleave": false, 00:18:32.120 "dif_type": 0, 00:18:32.120 "assigned_rate_limits": { 00:18:32.120 "rw_ios_per_sec": 0, 00:18:32.120 "rw_mbytes_per_sec": 0, 00:18:32.120 "r_mbytes_per_sec": 0, 00:18:32.120 "w_mbytes_per_sec": 0 00:18:32.120 }, 00:18:32.120 "claimed": false, 00:18:32.120 "zoned": false, 00:18:32.120 "supported_io_types": { 00:18:32.120 "read": true, 00:18:32.120 "write": true, 00:18:32.120 "unmap": false, 00:18:32.120 "flush": false, 00:18:32.120 "reset": true, 00:18:32.120 "nvme_admin": false, 00:18:32.120 "nvme_io": false, 00:18:32.120 "nvme_io_md": false, 00:18:32.120 "write_zeroes": true, 00:18:32.120 "zcopy": false, 00:18:32.120 "get_zone_info": false, 00:18:32.120 "zone_management": false, 00:18:32.120 "zone_append": false, 00:18:32.120 "compare": false, 00:18:32.120 "compare_and_write": false, 00:18:32.120 "abort": false, 00:18:32.120 "seek_hole": false, 00:18:32.120 "seek_data": false, 00:18:32.120 "copy": false, 00:18:32.120 "nvme_iov_md": false 00:18:32.120 }, 00:18:32.120 "memory_domains": [ 00:18:32.120 { 00:18:32.120 "dma_device_id": "system", 00:18:32.120 "dma_device_type": 1 00:18:32.120 }, 00:18:32.120 { 00:18:32.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.120 "dma_device_type": 2 00:18:32.120 }, 00:18:32.120 { 00:18:32.120 "dma_device_id": "system", 00:18:32.120 "dma_device_type": 1 00:18:32.120 }, 00:18:32.120 { 00:18:32.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.120 
"dma_device_type": 2 00:18:32.120 } 00:18:32.120 ], 00:18:32.120 "driver_specific": { 00:18:32.120 "raid": { 00:18:32.120 "uuid": "8d841cc6-a1f7-4430-8f9c-a2196c039e11", 00:18:32.120 "strip_size_kb": 0, 00:18:32.120 "state": "online", 00:18:32.120 "raid_level": "raid1", 00:18:32.120 "superblock": true, 00:18:32.120 "num_base_bdevs": 2, 00:18:32.120 "num_base_bdevs_discovered": 2, 00:18:32.120 "num_base_bdevs_operational": 2, 00:18:32.120 "base_bdevs_list": [ 00:18:32.120 { 00:18:32.120 "name": "pt1", 00:18:32.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:32.120 "is_configured": true, 00:18:32.120 "data_offset": 256, 00:18:32.120 "data_size": 7936 00:18:32.120 }, 00:18:32.120 { 00:18:32.120 "name": "pt2", 00:18:32.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.120 "is_configured": true, 00:18:32.120 "data_offset": 256, 00:18:32.120 "data_size": 7936 00:18:32.120 } 00:18:32.120 ] 00:18:32.120 } 00:18:32.120 } 00:18:32.120 }' 00:18:32.380 18:03:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:32.380 pt2' 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.380 18:03:14 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.380 18:03:14 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:32.380 [2024-11-26 18:03:14.184403] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 8d841cc6-a1f7-4430-8f9c-a2196c039e11 '!=' 8d841cc6-a1f7-4430-8f9c-a2196c039e11 ']' 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.380 [2024-11-26 18:03:14.220086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.380 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.639 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.639 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.639 "name": "raid_bdev1", 00:18:32.639 "uuid": "8d841cc6-a1f7-4430-8f9c-a2196c039e11", 00:18:32.639 "strip_size_kb": 0, 00:18:32.639 "state": "online", 00:18:32.639 "raid_level": "raid1", 00:18:32.639 "superblock": true, 00:18:32.639 "num_base_bdevs": 2, 00:18:32.639 "num_base_bdevs_discovered": 1, 00:18:32.639 "num_base_bdevs_operational": 1, 00:18:32.639 "base_bdevs_list": [ 00:18:32.639 { 00:18:32.639 "name": null, 00:18:32.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.639 
"is_configured": false, 00:18:32.639 "data_offset": 0, 00:18:32.639 "data_size": 7936 00:18:32.639 }, 00:18:32.639 { 00:18:32.639 "name": "pt2", 00:18:32.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.639 "is_configured": true, 00:18:32.639 "data_offset": 256, 00:18:32.639 "data_size": 7936 00:18:32.639 } 00:18:32.639 ] 00:18:32.639 }' 00:18:32.639 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.639 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.899 [2024-11-26 18:03:14.659271] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:32.899 [2024-11-26 18:03:14.659366] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:32.899 [2024-11-26 18:03:14.659490] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:32.899 [2024-11-26 18:03:14.659578] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:32.899 [2024-11-26 18:03:14.659636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.899 18:03:14 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:32.899 18:03:14 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.899 [2024-11-26 18:03:14.739192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:32.899 [2024-11-26 18:03:14.739275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.899 [2024-11-26 18:03:14.739296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:32.899 [2024-11-26 18:03:14.739310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.899 [2024-11-26 18:03:14.741661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.899 [2024-11-26 18:03:14.741711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:32.899 [2024-11-26 18:03:14.741785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:32.899 [2024-11-26 18:03:14.741849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:32.899 [2024-11-26 18:03:14.741961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:32.899 [2024-11-26 18:03:14.741975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:32.899 [2024-11-26 18:03:14.742086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:32.899 [2024-11-26 18:03:14.742218] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:32.899 [2024-11-26 18:03:14.742279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:32.899 [2024-11-26 18:03:14.742449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.899 pt2 00:18:32.899 18:03:14 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.899 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.158 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.158 18:03:14 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.158 "name": "raid_bdev1", 00:18:33.158 "uuid": "8d841cc6-a1f7-4430-8f9c-a2196c039e11", 00:18:33.158 "strip_size_kb": 0, 00:18:33.158 "state": "online", 00:18:33.158 "raid_level": "raid1", 00:18:33.158 "superblock": true, 00:18:33.158 "num_base_bdevs": 2, 00:18:33.158 "num_base_bdevs_discovered": 1, 00:18:33.158 "num_base_bdevs_operational": 1, 00:18:33.158 "base_bdevs_list": [ 00:18:33.158 { 00:18:33.158 "name": null, 00:18:33.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.158 "is_configured": false, 00:18:33.158 "data_offset": 256, 00:18:33.158 "data_size": 7936 00:18:33.158 }, 00:18:33.158 { 00:18:33.158 "name": "pt2", 00:18:33.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:33.158 "is_configured": true, 00:18:33.158 "data_offset": 256, 00:18:33.158 "data_size": 7936 00:18:33.158 } 00:18:33.158 ] 00:18:33.158 }' 00:18:33.158 18:03:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.158 18:03:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.417 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.418 [2024-11-26 18:03:15.214310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:33.418 [2024-11-26 18:03:15.214423] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.418 [2024-11-26 18:03:15.214550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.418 [2024-11-26 18:03:15.214651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:18:33.418 [2024-11-26 18:03:15.214705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.418 [2024-11-26 18:03:15.266266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:33.418 [2024-11-26 18:03:15.266395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.418 [2024-11-26 18:03:15.266454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:33.418 [2024-11-26 
18:03:15.266494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.418 [2024-11-26 18:03:15.268828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.418 [2024-11-26 18:03:15.268928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:33.418 [2024-11-26 18:03:15.269057] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:33.418 [2024-11-26 18:03:15.269166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:33.418 [2024-11-26 18:03:15.269381] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:33.418 [2024-11-26 18:03:15.269449] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:33.418 [2024-11-26 18:03:15.269505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:33.418 [2024-11-26 18:03:15.269693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:33.418 [2024-11-26 18:03:15.269826] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:33.418 [2024-11-26 18:03:15.269870] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:33.418 [2024-11-26 18:03:15.269955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:33.418 [2024-11-26 18:03:15.270102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:33.418 [2024-11-26 18:03:15.270116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:33.418 [2024-11-26 18:03:15.270291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.418 pt1 00:18:33.418 18:03:15 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.418 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.678 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.678 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.678 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.678 18:03:15 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.678 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.678 "name": "raid_bdev1", 00:18:33.678 "uuid": "8d841cc6-a1f7-4430-8f9c-a2196c039e11", 00:18:33.678 "strip_size_kb": 0, 00:18:33.678 "state": "online", 00:18:33.678 "raid_level": "raid1", 00:18:33.678 "superblock": true, 00:18:33.678 "num_base_bdevs": 2, 00:18:33.678 "num_base_bdevs_discovered": 1, 00:18:33.678 "num_base_bdevs_operational": 1, 00:18:33.678 "base_bdevs_list": [ 00:18:33.678 { 00:18:33.678 "name": null, 00:18:33.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.678 "is_configured": false, 00:18:33.678 "data_offset": 256, 00:18:33.678 "data_size": 7936 00:18:33.678 }, 00:18:33.678 { 00:18:33.678 "name": "pt2", 00:18:33.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:33.678 "is_configured": true, 00:18:33.678 "data_offset": 256, 00:18:33.678 "data_size": 7936 00:18:33.678 } 00:18:33.678 ] 00:18:33.678 }' 00:18:33.678 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.678 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.937 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:33.937 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.937 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:33.937 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.937 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.937 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:33.937 18:03:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:33.937 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:33.937 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.937 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.937 [2024-11-26 18:03:15.781996] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.196 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.196 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 8d841cc6-a1f7-4430-8f9c-a2196c039e11 '!=' 8d841cc6-a1f7-4430-8f9c-a2196c039e11 ']' 00:18:34.196 18:03:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87891 00:18:34.196 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87891 ']' 00:18:34.196 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87891 00:18:34.196 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:34.196 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.196 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87891 00:18:34.196 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:34.196 killing process with pid 87891 00:18:34.196 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:34.196 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 87891' 00:18:34.196 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87891 00:18:34.196 [2024-11-26 18:03:15.849768] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:34.196 [2024-11-26 18:03:15.849881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.196 18:03:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87891 00:18:34.196 [2024-11-26 18:03:15.849942] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.196 [2024-11-26 18:03:15.849963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:34.454 [2024-11-26 18:03:16.115294] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:35.894 18:03:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:35.894 ************************************ 00:18:35.894 END TEST raid_superblock_test_md_separate 00:18:35.894 ************************************ 00:18:35.894 00:18:35.894 real 0m6.480s 00:18:35.894 user 0m9.700s 00:18:35.894 sys 0m1.167s 00:18:35.894 18:03:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.894 18:03:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.894 18:03:17 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:35.894 18:03:17 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:35.894 18:03:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:35.894 18:03:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.894 18:03:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:35.894 ************************************ 
00:18:35.894 START TEST raid_rebuild_test_sb_md_separate 00:18:35.894 ************************************ 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:35.894 
18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:35.894 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:35.895 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:35.895 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:35.895 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88225 00:18:35.895 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:35.895 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88225 00:18:35.895 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88225 ']' 00:18:35.895 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.895 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.895 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.895 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.895 18:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.895 [2024-11-26 18:03:17.560762] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:18:35.895 [2024-11-26 18:03:17.561003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88225 ] 00:18:35.895 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:35.895 Zero copy mechanism will not be used. 00:18:35.895 [2024-11-26 18:03:17.736191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.153 [2024-11-26 18:03:17.859330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.411 [2024-11-26 18:03:18.084571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:36.411 [2024-11-26 18:03:18.084651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:36.670 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.670 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:36.670 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:36.670 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:36.670 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:36.670 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.670 BaseBdev1_malloc 00:18:36.670 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.670 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:36.670 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.670 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.670 [2024-11-26 18:03:18.513433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:36.670 [2024-11-26 18:03:18.513584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.670 [2024-11-26 18:03:18.513637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:36.670 [2024-11-26 18:03:18.513653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.670 [2024-11-26 18:03:18.515977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.670 [2024-11-26 18:03:18.516031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:36.670 BaseBdev1 00:18:36.670 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.670 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:36.670 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:36.670 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.670 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:36.929 BaseBdev2_malloc 00:18:36.929 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.929 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:36.929 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.929 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.929 [2024-11-26 18:03:18.573163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:36.929 [2024-11-26 18:03:18.573242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.929 [2024-11-26 18:03:18.573266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:36.929 [2024-11-26 18:03:18.573282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.929 [2024-11-26 18:03:18.575538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.929 [2024-11-26 18:03:18.575585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:36.929 BaseBdev2 00:18:36.929 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.929 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:36.929 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.929 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.929 spare_malloc 00:18:36.929 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.929 18:03:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:36.929 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.929 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.929 spare_delay 00:18:36.929 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.929 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:36.929 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.929 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.930 [2024-11-26 18:03:18.654360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:36.930 [2024-11-26 18:03:18.654560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.930 [2024-11-26 18:03:18.654604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:36.930 [2024-11-26 18:03:18.654620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.930 [2024-11-26 18:03:18.656975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.930 [2024-11-26 18:03:18.657045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:36.930 spare 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:36.930 18:03:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.930 [2024-11-26 18:03:18.666366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:36.930 [2024-11-26 18:03:18.668442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:36.930 [2024-11-26 18:03:18.668662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:36.930 [2024-11-26 18:03:18.668680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:36.930 [2024-11-26 18:03:18.668797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:36.930 [2024-11-26 18:03:18.668938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:36.930 [2024-11-26 18:03:18.668948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:36.930 [2024-11-26 18:03:18.669092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.930 18:03:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.930 "name": "raid_bdev1", 00:18:36.930 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:36.930 "strip_size_kb": 0, 00:18:36.930 "state": "online", 00:18:36.930 "raid_level": "raid1", 00:18:36.930 "superblock": true, 00:18:36.930 "num_base_bdevs": 2, 00:18:36.930 "num_base_bdevs_discovered": 2, 00:18:36.930 "num_base_bdevs_operational": 2, 00:18:36.930 "base_bdevs_list": [ 00:18:36.930 { 00:18:36.930 "name": "BaseBdev1", 00:18:36.930 "uuid": "0fb83b1d-b08c-5820-92e2-2c82d33e1023", 00:18:36.930 "is_configured": true, 00:18:36.930 "data_offset": 256, 00:18:36.930 "data_size": 7936 00:18:36.930 }, 00:18:36.930 { 00:18:36.930 "name": "BaseBdev2", 00:18:36.930 "uuid": 
"f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:36.930 "is_configured": true, 00:18:36.930 "data_offset": 256, 00:18:36.930 "data_size": 7936 00:18:36.930 } 00:18:36.930 ] 00:18:36.930 }' 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.930 18:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.498 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:37.498 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.498 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.498 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:37.498 [2024-11-26 18:03:19.134067] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.498 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.498 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:37.498 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.498 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.498 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.498 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:37.498 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.498 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:37.498 18:03:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:37.499 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:37.499 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:37.499 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:37.499 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:37.499 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:37.499 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:37.499 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:37.499 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:37.499 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:37.499 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:37.499 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:37.499 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:37.759 [2024-11-26 18:03:19.469448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:37.759 /dev/nbd0 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:37.759 18:03:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:37.759 1+0 records in 00:18:37.759 1+0 records out 00:18:37.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588447 s, 7.0 MB/s 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:37.759 18:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:38.697 7936+0 records in 00:18:38.697 7936+0 records out 00:18:38.697 32505856 bytes (33 MB, 31 MiB) copied, 0.760029 s, 42.8 MB/s 00:18:38.697 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:38.697 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:38.697 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:38.697 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:38.697 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:38.697 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:38.697 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:38.956 [2024-11-26 18:03:20.629272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.956 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:38.956 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:38.956 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:38.956 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:38.956 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:38.956 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:38.956 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:38.956 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.957 [2024-11-26 18:03:20.654927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:38.957 18:03:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.957 "name": "raid_bdev1", 00:18:38.957 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:38.957 "strip_size_kb": 0, 00:18:38.957 "state": "online", 00:18:38.957 "raid_level": "raid1", 00:18:38.957 "superblock": true, 00:18:38.957 "num_base_bdevs": 2, 00:18:38.957 "num_base_bdevs_discovered": 1, 00:18:38.957 "num_base_bdevs_operational": 1, 00:18:38.957 "base_bdevs_list": [ 00:18:38.957 { 00:18:38.957 "name": null, 00:18:38.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.957 "is_configured": false, 00:18:38.957 "data_offset": 0, 00:18:38.957 "data_size": 7936 00:18:38.957 }, 00:18:38.957 { 00:18:38.957 "name": "BaseBdev2", 00:18:38.957 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:38.957 "is_configured": true, 00:18:38.957 "data_offset": 256, 00:18:38.957 "data_size": 7936 00:18:38.957 } 
00:18:38.957 ] 00:18:38.957 }' 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.957 18:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.216 18:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:39.216 18:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.216 18:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.217 [2024-11-26 18:03:21.050267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:39.217 [2024-11-26 18:03:21.068938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:39.217 18:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.217 18:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:39.217 [2024-11-26 18:03:21.071298] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:40.597 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.597 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.597 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.597 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.597 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.597 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.597 18:03:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.597 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.597 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.597 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.597 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.597 "name": "raid_bdev1", 00:18:40.597 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:40.597 "strip_size_kb": 0, 00:18:40.597 "state": "online", 00:18:40.597 "raid_level": "raid1", 00:18:40.597 "superblock": true, 00:18:40.597 "num_base_bdevs": 2, 00:18:40.597 "num_base_bdevs_discovered": 2, 00:18:40.597 "num_base_bdevs_operational": 2, 00:18:40.597 "process": { 00:18:40.597 "type": "rebuild", 00:18:40.597 "target": "spare", 00:18:40.597 "progress": { 00:18:40.597 "blocks": 2560, 00:18:40.597 "percent": 32 00:18:40.597 } 00:18:40.597 }, 00:18:40.597 "base_bdevs_list": [ 00:18:40.597 { 00:18:40.597 "name": "spare", 00:18:40.597 "uuid": "a983cc28-6354-57e4-8c7a-d1d7f655aeb6", 00:18:40.597 "is_configured": true, 00:18:40.597 "data_offset": 256, 00:18:40.597 "data_size": 7936 00:18:40.597 }, 00:18:40.597 { 00:18:40.597 "name": "BaseBdev2", 00:18:40.597 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:40.597 "is_configured": true, 00:18:40.597 "data_offset": 256, 00:18:40.597 "data_size": 7936 00:18:40.597 } 00:18:40.597 ] 00:18:40.597 }' 00:18:40.597 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.597 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.598 [2024-11-26 18:03:22.222668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:40.598 [2024-11-26 18:03:22.277933] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:40.598 [2024-11-26 18:03:22.278042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.598 [2024-11-26 18:03:22.278061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:40.598 [2024-11-26 18:03:22.278077] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.598 "name": "raid_bdev1", 00:18:40.598 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:40.598 "strip_size_kb": 0, 00:18:40.598 "state": "online", 00:18:40.598 "raid_level": "raid1", 00:18:40.598 "superblock": true, 00:18:40.598 "num_base_bdevs": 2, 00:18:40.598 "num_base_bdevs_discovered": 1, 00:18:40.598 "num_base_bdevs_operational": 1, 00:18:40.598 "base_bdevs_list": [ 00:18:40.598 { 00:18:40.598 "name": null, 00:18:40.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.598 "is_configured": false, 00:18:40.598 "data_offset": 0, 00:18:40.598 "data_size": 7936 00:18:40.598 }, 00:18:40.598 { 00:18:40.598 "name": "BaseBdev2", 00:18:40.598 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:40.598 "is_configured": true, 00:18:40.598 "data_offset": 
256, 00:18:40.598 "data_size": 7936 00:18:40.598 } 00:18:40.598 ] 00:18:40.598 }' 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.598 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.857 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:40.857 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.857 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:40.857 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:40.857 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.857 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.857 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.857 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.857 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.857 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.116 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.116 "name": "raid_bdev1", 00:18:41.116 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:41.116 "strip_size_kb": 0, 00:18:41.116 "state": "online", 00:18:41.116 "raid_level": "raid1", 00:18:41.116 "superblock": true, 00:18:41.116 "num_base_bdevs": 2, 00:18:41.116 "num_base_bdevs_discovered": 1, 00:18:41.116 "num_base_bdevs_operational": 1, 
00:18:41.116 "base_bdevs_list": [ 00:18:41.116 { 00:18:41.116 "name": null, 00:18:41.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.116 "is_configured": false, 00:18:41.116 "data_offset": 0, 00:18:41.116 "data_size": 7936 00:18:41.116 }, 00:18:41.116 { 00:18:41.116 "name": "BaseBdev2", 00:18:41.116 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:41.116 "is_configured": true, 00:18:41.116 "data_offset": 256, 00:18:41.116 "data_size": 7936 00:18:41.116 } 00:18:41.116 ] 00:18:41.116 }' 00:18:41.116 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.116 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:41.116 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.116 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:41.116 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:41.116 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.116 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.116 [2024-11-26 18:03:22.816575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:41.116 [2024-11-26 18:03:22.833659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:41.116 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.117 18:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:41.117 [2024-11-26 18:03:22.835873] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:42.057 18:03:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.057 18:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.057 18:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.057 18:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.057 18:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.057 18:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.057 18:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.057 18:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.057 18:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.057 18:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.057 18:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.057 "name": "raid_bdev1", 00:18:42.057 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:42.057 "strip_size_kb": 0, 00:18:42.057 "state": "online", 00:18:42.057 "raid_level": "raid1", 00:18:42.057 "superblock": true, 00:18:42.057 "num_base_bdevs": 2, 00:18:42.057 "num_base_bdevs_discovered": 2, 00:18:42.057 "num_base_bdevs_operational": 2, 00:18:42.057 "process": { 00:18:42.057 "type": "rebuild", 00:18:42.057 "target": "spare", 00:18:42.057 "progress": { 00:18:42.057 "blocks": 2560, 00:18:42.057 "percent": 32 00:18:42.057 } 00:18:42.057 }, 00:18:42.057 "base_bdevs_list": [ 00:18:42.057 { 00:18:42.057 "name": "spare", 00:18:42.057 "uuid": 
"a983cc28-6354-57e4-8c7a-d1d7f655aeb6", 00:18:42.057 "is_configured": true, 00:18:42.057 "data_offset": 256, 00:18:42.057 "data_size": 7936 00:18:42.057 }, 00:18:42.057 { 00:18:42.057 "name": "BaseBdev2", 00:18:42.057 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:42.057 "is_configured": true, 00:18:42.057 "data_offset": 256, 00:18:42.057 "data_size": 7936 00:18:42.057 } 00:18:42.057 ] 00:18:42.058 }' 00:18:42.058 18:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.317 18:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.317 18:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:42.317 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=745 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.317 
18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.317 "name": "raid_bdev1", 00:18:42.317 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:42.317 "strip_size_kb": 0, 00:18:42.317 "state": "online", 00:18:42.317 "raid_level": "raid1", 00:18:42.317 "superblock": true, 00:18:42.317 "num_base_bdevs": 2, 00:18:42.317 "num_base_bdevs_discovered": 2, 00:18:42.317 "num_base_bdevs_operational": 2, 00:18:42.317 "process": { 00:18:42.317 "type": "rebuild", 00:18:42.317 "target": "spare", 00:18:42.317 "progress": { 00:18:42.317 "blocks": 2816, 00:18:42.317 "percent": 35 00:18:42.317 } 00:18:42.317 }, 00:18:42.317 "base_bdevs_list": [ 00:18:42.317 { 00:18:42.317 "name": "spare", 00:18:42.317 "uuid": "a983cc28-6354-57e4-8c7a-d1d7f655aeb6", 00:18:42.317 "is_configured": true, 00:18:42.317 "data_offset": 256, 00:18:42.317 "data_size": 7936 00:18:42.317 
}, 00:18:42.317 { 00:18:42.317 "name": "BaseBdev2", 00:18:42.317 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:42.317 "is_configured": true, 00:18:42.317 "data_offset": 256, 00:18:42.317 "data_size": 7936 00:18:42.317 } 00:18:42.317 ] 00:18:42.317 }' 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.317 18:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:43.698 18:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:43.698 18:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.698 18:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.698 18:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.698 18:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.698 18:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.698 18:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.698 18:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.698 18:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:43.698 18:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.698 18:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.698 18:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.698 "name": "raid_bdev1", 00:18:43.698 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:43.698 "strip_size_kb": 0, 00:18:43.698 "state": "online", 00:18:43.698 "raid_level": "raid1", 00:18:43.698 "superblock": true, 00:18:43.698 "num_base_bdevs": 2, 00:18:43.698 "num_base_bdevs_discovered": 2, 00:18:43.698 "num_base_bdevs_operational": 2, 00:18:43.698 "process": { 00:18:43.698 "type": "rebuild", 00:18:43.698 "target": "spare", 00:18:43.698 "progress": { 00:18:43.698 "blocks": 5888, 00:18:43.698 "percent": 74 00:18:43.698 } 00:18:43.698 }, 00:18:43.698 "base_bdevs_list": [ 00:18:43.698 { 00:18:43.698 "name": "spare", 00:18:43.698 "uuid": "a983cc28-6354-57e4-8c7a-d1d7f655aeb6", 00:18:43.698 "is_configured": true, 00:18:43.698 "data_offset": 256, 00:18:43.698 "data_size": 7936 00:18:43.698 }, 00:18:43.698 { 00:18:43.698 "name": "BaseBdev2", 00:18:43.698 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:43.698 "is_configured": true, 00:18:43.699 "data_offset": 256, 00:18:43.699 "data_size": 7936 00:18:43.699 } 00:18:43.699 ] 00:18:43.699 }' 00:18:43.699 18:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.699 18:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.699 18:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.699 18:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.699 18:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:18:44.267 [2024-11-26 18:03:25.952293] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:44.267 [2024-11-26 18:03:25.952401] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:44.267 [2024-11-26 18:03:25.952554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.527 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:44.527 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.527 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.527 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.527 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.527 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.527 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.527 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.527 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.527 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.527 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.527 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.527 "name": "raid_bdev1", 00:18:44.527 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:44.527 
"strip_size_kb": 0, 00:18:44.527 "state": "online", 00:18:44.527 "raid_level": "raid1", 00:18:44.527 "superblock": true, 00:18:44.527 "num_base_bdevs": 2, 00:18:44.527 "num_base_bdevs_discovered": 2, 00:18:44.527 "num_base_bdevs_operational": 2, 00:18:44.527 "base_bdevs_list": [ 00:18:44.527 { 00:18:44.527 "name": "spare", 00:18:44.527 "uuid": "a983cc28-6354-57e4-8c7a-d1d7f655aeb6", 00:18:44.527 "is_configured": true, 00:18:44.527 "data_offset": 256, 00:18:44.527 "data_size": 7936 00:18:44.527 }, 00:18:44.527 { 00:18:44.527 "name": "BaseBdev2", 00:18:44.527 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:44.527 "is_configured": true, 00:18:44.527 "data_offset": 256, 00:18:44.527 "data_size": 7936 00:18:44.527 } 00:18:44.527 ] 00:18:44.527 }' 00:18:44.527 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.527 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:44.527 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.786 18:03:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.786 "name": "raid_bdev1", 00:18:44.786 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:44.786 "strip_size_kb": 0, 00:18:44.786 "state": "online", 00:18:44.786 "raid_level": "raid1", 00:18:44.786 "superblock": true, 00:18:44.786 "num_base_bdevs": 2, 00:18:44.786 "num_base_bdevs_discovered": 2, 00:18:44.786 "num_base_bdevs_operational": 2, 00:18:44.786 "base_bdevs_list": [ 00:18:44.786 { 00:18:44.786 "name": "spare", 00:18:44.786 "uuid": "a983cc28-6354-57e4-8c7a-d1d7f655aeb6", 00:18:44.786 "is_configured": true, 00:18:44.786 "data_offset": 256, 00:18:44.786 "data_size": 7936 00:18:44.786 }, 00:18:44.786 { 00:18:44.786 "name": "BaseBdev2", 00:18:44.786 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:44.786 "is_configured": true, 00:18:44.786 "data_offset": 256, 00:18:44.786 "data_size": 7936 00:18:44.786 } 00:18:44.786 ] 00:18:44.786 }' 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.786 18:03:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.786 "name": "raid_bdev1", 00:18:44.786 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:44.786 "strip_size_kb": 0, 00:18:44.786 "state": "online", 00:18:44.786 "raid_level": "raid1", 00:18:44.786 "superblock": true, 00:18:44.786 "num_base_bdevs": 2, 00:18:44.786 "num_base_bdevs_discovered": 2, 00:18:44.786 "num_base_bdevs_operational": 2, 00:18:44.786 "base_bdevs_list": [ 00:18:44.786 { 00:18:44.786 "name": "spare", 00:18:44.786 "uuid": "a983cc28-6354-57e4-8c7a-d1d7f655aeb6", 00:18:44.786 "is_configured": true, 00:18:44.786 "data_offset": 256, 00:18:44.786 "data_size": 7936 00:18:44.786 }, 00:18:44.786 { 00:18:44.786 "name": "BaseBdev2", 00:18:44.786 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:44.786 "is_configured": true, 00:18:44.786 "data_offset": 256, 00:18:44.786 "data_size": 7936 00:18:44.786 } 00:18:44.786 ] 00:18:44.786 }' 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.786 18:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.354 [2024-11-26 18:03:27.023015] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:45.354 [2024-11-26 18:03:27.023070] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:45.354 [2024-11-26 18:03:27.023186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:45.354 [2024-11-26 18:03:27.023270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:18:45.354 [2024-11-26 18:03:27.023288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:45.354 18:03:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:45.354 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:45.614 /dev/nbd0 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:45.614 1+0 records in 00:18:45.614 1+0 records out 00:18:45.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040505 
s, 10.1 MB/s 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:45.614 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:45.872 /dev/nbd1 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@877 -- # break 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:45.872 1+0 records in 00:18:45.872 1+0 records out 00:18:45.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449827 s, 9.1 MB/s 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:45.872 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:46.132 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:46.132 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:46.132 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:46.132 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:46.132 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:46.132 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.132 18:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:46.392 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:46.392 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:46.392 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:46.392 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.392 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.392 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:46.392 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:46.392 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.392 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.392 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:46.651 
18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.651 [2024-11-26 18:03:28.416932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:46.651 [2024-11-26 18:03:28.417008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.651 [2024-11-26 18:03:28.417048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
00:18:46.651 [2024-11-26 18:03:28.417063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.651 [2024-11-26 18:03:28.419401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.651 [2024-11-26 18:03:28.419442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:46.651 [2024-11-26 18:03:28.419524] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:46.651 [2024-11-26 18:03:28.419595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:46.651 [2024-11-26 18:03:28.419772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:46.651 spare 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.651 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.910 [2024-11-26 18:03:28.519695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:46.910 [2024-11-26 18:03:28.519764] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:46.910 [2024-11-26 18:03:28.519924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:46.910 [2024-11-26 18:03:28.520153] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:46.910 [2024-11-26 18:03:28.520185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:46.910 [2024-11-26 18:03:28.520369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:46.910 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.910 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:46.910 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.910 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.910 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.910 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.910 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:46.910 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.910 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.910 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.910 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.910 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.910 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.910 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.910 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.910 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.910 18:03:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.910 "name": "raid_bdev1", 00:18:46.910 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:46.910 "strip_size_kb": 0, 00:18:46.910 "state": "online", 00:18:46.910 "raid_level": "raid1", 00:18:46.910 "superblock": true, 00:18:46.910 "num_base_bdevs": 2, 00:18:46.910 "num_base_bdevs_discovered": 2, 00:18:46.910 "num_base_bdevs_operational": 2, 00:18:46.910 "base_bdevs_list": [ 00:18:46.910 { 00:18:46.910 "name": "spare", 00:18:46.910 "uuid": "a983cc28-6354-57e4-8c7a-d1d7f655aeb6", 00:18:46.910 "is_configured": true, 00:18:46.910 "data_offset": 256, 00:18:46.910 "data_size": 7936 00:18:46.910 }, 00:18:46.910 { 00:18:46.910 "name": "BaseBdev2", 00:18:46.910 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:46.910 "is_configured": true, 00:18:46.910 "data_offset": 256, 00:18:46.910 "data_size": 7936 00:18:46.910 } 00:18:46.910 ] 00:18:46.910 }' 00:18:46.910 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.910 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.169 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:47.169 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.169 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:47.169 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:47.169 18:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.169 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.169 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.169 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.169 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.169 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.428 "name": "raid_bdev1", 00:18:47.428 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:47.428 "strip_size_kb": 0, 00:18:47.428 "state": "online", 00:18:47.428 "raid_level": "raid1", 00:18:47.428 "superblock": true, 00:18:47.428 "num_base_bdevs": 2, 00:18:47.428 "num_base_bdevs_discovered": 2, 00:18:47.428 "num_base_bdevs_operational": 2, 00:18:47.428 "base_bdevs_list": [ 00:18:47.428 { 00:18:47.428 "name": "spare", 00:18:47.428 "uuid": "a983cc28-6354-57e4-8c7a-d1d7f655aeb6", 00:18:47.428 "is_configured": true, 00:18:47.428 "data_offset": 256, 00:18:47.428 "data_size": 7936 00:18:47.428 }, 00:18:47.428 { 00:18:47.428 "name": "BaseBdev2", 00:18:47.428 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:47.428 "is_configured": true, 00:18:47.428 "data_offset": 256, 00:18:47.428 "data_size": 7936 00:18:47.428 } 00:18:47.428 ] 00:18:47.428 }' 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.428 [2024-11-26 18:03:29.191697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:47.428 18:03:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.428 "name": "raid_bdev1", 00:18:47.428 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:47.428 "strip_size_kb": 0, 00:18:47.428 "state": "online", 00:18:47.428 "raid_level": "raid1", 00:18:47.428 "superblock": true, 00:18:47.428 "num_base_bdevs": 2, 00:18:47.428 "num_base_bdevs_discovered": 1, 00:18:47.428 "num_base_bdevs_operational": 1, 00:18:47.428 "base_bdevs_list": [ 00:18:47.428 { 00:18:47.428 "name": null, 00:18:47.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.428 "is_configured": false, 00:18:47.428 "data_offset": 0, 00:18:47.428 "data_size": 7936 00:18:47.428 }, 00:18:47.428 { 00:18:47.428 "name": "BaseBdev2", 00:18:47.428 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:47.428 "is_configured": true, 00:18:47.428 "data_offset": 256, 00:18:47.428 "data_size": 7936 00:18:47.428 } 
00:18:47.428 ] 00:18:47.428 }' 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.428 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.995 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:47.995 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.995 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.995 [2024-11-26 18:03:29.674912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.995 [2024-11-26 18:03:29.675172] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:47.995 [2024-11-26 18:03:29.675197] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:47.995 [2024-11-26 18:03:29.675240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.995 [2024-11-26 18:03:29.692801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:47.995 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.995 18:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:47.995 [2024-11-26 18:03:29.695089] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:48.934 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.934 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.934 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:48.934 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:48.934 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.934 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.934 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.934 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.934 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.934 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.935 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.935 "name": "raid_bdev1", 00:18:48.935 
"uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:48.935 "strip_size_kb": 0, 00:18:48.935 "state": "online", 00:18:48.935 "raid_level": "raid1", 00:18:48.935 "superblock": true, 00:18:48.935 "num_base_bdevs": 2, 00:18:48.935 "num_base_bdevs_discovered": 2, 00:18:48.935 "num_base_bdevs_operational": 2, 00:18:48.935 "process": { 00:18:48.935 "type": "rebuild", 00:18:48.935 "target": "spare", 00:18:48.935 "progress": { 00:18:48.935 "blocks": 2560, 00:18:48.935 "percent": 32 00:18:48.935 } 00:18:48.935 }, 00:18:48.935 "base_bdevs_list": [ 00:18:48.935 { 00:18:48.935 "name": "spare", 00:18:48.935 "uuid": "a983cc28-6354-57e4-8c7a-d1d7f655aeb6", 00:18:48.935 "is_configured": true, 00:18:48.935 "data_offset": 256, 00:18:48.935 "data_size": 7936 00:18:48.935 }, 00:18:48.935 { 00:18:48.935 "name": "BaseBdev2", 00:18:48.935 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:48.935 "is_configured": true, 00:18:48.935 "data_offset": 256, 00:18:48.935 "data_size": 7936 00:18:48.935 } 00:18:48.935 ] 00:18:48.935 }' 00:18:48.935 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.194 [2024-11-26 18:03:30.847077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.194 
[2024-11-26 18:03:30.901427] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:49.194 [2024-11-26 18:03:30.901530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.194 [2024-11-26 18:03:30.901558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.194 [2024-11-26 18:03:30.901584] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.194 18:03:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.194 "name": "raid_bdev1", 00:18:49.194 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:49.194 "strip_size_kb": 0, 00:18:49.194 "state": "online", 00:18:49.194 "raid_level": "raid1", 00:18:49.194 "superblock": true, 00:18:49.194 "num_base_bdevs": 2, 00:18:49.194 "num_base_bdevs_discovered": 1, 00:18:49.194 "num_base_bdevs_operational": 1, 00:18:49.194 "base_bdevs_list": [ 00:18:49.194 { 00:18:49.194 "name": null, 00:18:49.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.194 "is_configured": false, 00:18:49.194 "data_offset": 0, 00:18:49.194 "data_size": 7936 00:18:49.194 }, 00:18:49.194 { 00:18:49.194 "name": "BaseBdev2", 00:18:49.194 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:49.194 "is_configured": true, 00:18:49.194 "data_offset": 256, 00:18:49.194 "data_size": 7936 00:18:49.194 } 00:18:49.194 ] 00:18:49.194 }' 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.194 18:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.761 18:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:49.761 18:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.761 18:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:49.761 [2024-11-26 18:03:31.408085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:49.761 [2024-11-26 18:03:31.408177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.761 [2024-11-26 18:03:31.408208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:49.761 [2024-11-26 18:03:31.408221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.761 [2024-11-26 18:03:31.408538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.761 [2024-11-26 18:03:31.408568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:49.761 [2024-11-26 18:03:31.408642] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:49.761 [2024-11-26 18:03:31.408665] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:49.761 [2024-11-26 18:03:31.408677] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:49.761 [2024-11-26 18:03:31.408701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:49.761 [2024-11-26 18:03:31.425830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:49.761 spare 00:18:49.761 18:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.761 18:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:49.761 [2024-11-26 18:03:31.428105] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:50.698 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:50.698 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.698 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:50.698 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:50.698 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.698 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.698 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.698 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.698 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.698 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.698 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.698 "name": 
"raid_bdev1", 00:18:50.698 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:50.698 "strip_size_kb": 0, 00:18:50.698 "state": "online", 00:18:50.698 "raid_level": "raid1", 00:18:50.698 "superblock": true, 00:18:50.698 "num_base_bdevs": 2, 00:18:50.698 "num_base_bdevs_discovered": 2, 00:18:50.698 "num_base_bdevs_operational": 2, 00:18:50.698 "process": { 00:18:50.698 "type": "rebuild", 00:18:50.698 "target": "spare", 00:18:50.698 "progress": { 00:18:50.698 "blocks": 2560, 00:18:50.698 "percent": 32 00:18:50.698 } 00:18:50.698 }, 00:18:50.698 "base_bdevs_list": [ 00:18:50.698 { 00:18:50.698 "name": "spare", 00:18:50.698 "uuid": "a983cc28-6354-57e4-8c7a-d1d7f655aeb6", 00:18:50.698 "is_configured": true, 00:18:50.698 "data_offset": 256, 00:18:50.698 "data_size": 7936 00:18:50.698 }, 00:18:50.698 { 00:18:50.698 "name": "BaseBdev2", 00:18:50.698 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:50.698 "is_configured": true, 00:18:50.698 "data_offset": 256, 00:18:50.698 "data_size": 7936 00:18:50.698 } 00:18:50.698 ] 00:18:50.698 }' 00:18:50.698 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.698 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:50.698 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.957 [2024-11-26 18:03:32.591475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:50.957 [2024-11-26 18:03:32.634594] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:50.957 [2024-11-26 18:03:32.634692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.957 [2024-11-26 18:03:32.634714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.957 [2024-11-26 18:03:32.634723] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.957 "name": "raid_bdev1", 00:18:50.957 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:50.957 "strip_size_kb": 0, 00:18:50.957 "state": "online", 00:18:50.957 "raid_level": "raid1", 00:18:50.957 "superblock": true, 00:18:50.957 "num_base_bdevs": 2, 00:18:50.957 "num_base_bdevs_discovered": 1, 00:18:50.957 "num_base_bdevs_operational": 1, 00:18:50.957 "base_bdevs_list": [ 00:18:50.957 { 00:18:50.957 "name": null, 00:18:50.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.957 "is_configured": false, 00:18:50.957 "data_offset": 0, 00:18:50.957 "data_size": 7936 00:18:50.957 }, 00:18:50.957 { 00:18:50.957 "name": "BaseBdev2", 00:18:50.957 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:50.957 "is_configured": true, 00:18:50.957 "data_offset": 256, 00:18:50.957 "data_size": 7936 00:18:50.957 } 00:18:50.957 ] 00:18:50.957 }' 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.957 18:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.526 18:03:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.526 "name": "raid_bdev1", 00:18:51.526 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:51.526 "strip_size_kb": 0, 00:18:51.526 "state": "online", 00:18:51.526 "raid_level": "raid1", 00:18:51.526 "superblock": true, 00:18:51.526 "num_base_bdevs": 2, 00:18:51.526 "num_base_bdevs_discovered": 1, 00:18:51.526 "num_base_bdevs_operational": 1, 00:18:51.526 "base_bdevs_list": [ 00:18:51.526 { 00:18:51.526 "name": null, 00:18:51.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.526 "is_configured": false, 00:18:51.526 "data_offset": 0, 00:18:51.526 "data_size": 7936 00:18:51.526 }, 00:18:51.526 { 00:18:51.526 "name": "BaseBdev2", 00:18:51.526 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:51.526 "is_configured": true, 00:18:51.526 "data_offset": 256, 00:18:51.526 "data_size": 7936 00:18:51.526 } 00:18:51.526 ] 00:18:51.526 }' 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.526 [2024-11-26 18:03:33.253735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:51.526 [2024-11-26 18:03:33.253835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.526 [2024-11-26 18:03:33.253882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:51.526 [2024-11-26 18:03:33.253907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.526 [2024-11-26 18:03:33.254282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.526 [2024-11-26 18:03:33.254323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:18:51.526 [2024-11-26 18:03:33.254416] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:51.526 [2024-11-26 18:03:33.254447] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:51.526 [2024-11-26 18:03:33.254465] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:51.526 [2024-11-26 18:03:33.254482] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:51.526 BaseBdev1 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.526 18:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:52.464 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:52.464 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.464 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.464 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.464 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.464 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:52.464 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.464 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.464 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:52.464 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.464 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.464 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.464 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.464 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.464 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.464 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.464 "name": "raid_bdev1", 00:18:52.464 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:52.464 "strip_size_kb": 0, 00:18:52.464 "state": "online", 00:18:52.464 "raid_level": "raid1", 00:18:52.464 "superblock": true, 00:18:52.464 "num_base_bdevs": 2, 00:18:52.464 "num_base_bdevs_discovered": 1, 00:18:52.464 "num_base_bdevs_operational": 1, 00:18:52.464 "base_bdevs_list": [ 00:18:52.464 { 00:18:52.464 "name": null, 00:18:52.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.464 "is_configured": false, 00:18:52.464 "data_offset": 0, 00:18:52.464 "data_size": 7936 00:18:52.464 }, 00:18:52.464 { 00:18:52.464 "name": "BaseBdev2", 00:18:52.464 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:52.464 "is_configured": true, 00:18:52.464 "data_offset": 256, 00:18:52.464 "data_size": 7936 00:18:52.464 } 00:18:52.464 ] 00:18:52.464 }' 00:18:52.464 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.464 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.032 "name": "raid_bdev1", 00:18:53.032 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:53.032 "strip_size_kb": 0, 00:18:53.032 "state": "online", 00:18:53.032 "raid_level": "raid1", 00:18:53.032 "superblock": true, 00:18:53.032 "num_base_bdevs": 2, 00:18:53.032 "num_base_bdevs_discovered": 1, 00:18:53.032 "num_base_bdevs_operational": 1, 00:18:53.032 "base_bdevs_list": [ 00:18:53.032 { 00:18:53.032 "name": null, 00:18:53.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.032 "is_configured": false, 00:18:53.032 "data_offset": 0, 00:18:53.032 "data_size": 7936 00:18:53.032 }, 00:18:53.032 { 00:18:53.032 "name": "BaseBdev2", 00:18:53.032 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:53.032 "is_configured": 
true, 00:18:53.032 "data_offset": 256, 00:18:53.032 "data_size": 7936 00:18:53.032 } 00:18:53.032 ] 00:18:53.032 }' 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.032 [2024-11-26 18:03:34.835288] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:53.032 [2024-11-26 18:03:34.835501] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:53.032 [2024-11-26 18:03:34.835526] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:53.032 request: 00:18:53.032 { 00:18:53.032 "base_bdev": "BaseBdev1", 00:18:53.032 "raid_bdev": "raid_bdev1", 00:18:53.032 "method": "bdev_raid_add_base_bdev", 00:18:53.032 "req_id": 1 00:18:53.032 } 00:18:53.032 Got JSON-RPC error response 00:18:53.032 response: 00:18:53.032 { 00:18:53.032 "code": -22, 00:18:53.032 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:53.032 } 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.032 18:03:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:54.017 18:03:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:54.017 18:03:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.017 18:03:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.017 18:03:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:54.017 18:03:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.017 18:03:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:54.017 18:03:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.017 18:03:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.017 18:03:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.017 18:03:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.017 18:03:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.017 18:03:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.017 18:03:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.017 18:03:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.017 18:03:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.277 18:03:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.277 "name": "raid_bdev1", 00:18:54.277 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:54.277 "strip_size_kb": 0, 00:18:54.277 "state": "online", 00:18:54.277 "raid_level": "raid1", 00:18:54.277 "superblock": true, 00:18:54.277 "num_base_bdevs": 2, 00:18:54.277 "num_base_bdevs_discovered": 1, 00:18:54.277 "num_base_bdevs_operational": 1, 00:18:54.277 "base_bdevs_list": [ 00:18:54.277 { 00:18:54.277 "name": null, 00:18:54.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.277 "is_configured": false, 00:18:54.277 
"data_offset": 0, 00:18:54.277 "data_size": 7936 00:18:54.277 }, 00:18:54.277 { 00:18:54.277 "name": "BaseBdev2", 00:18:54.277 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:54.277 "is_configured": true, 00:18:54.277 "data_offset": 256, 00:18:54.277 "data_size": 7936 00:18:54.277 } 00:18:54.277 ] 00:18:54.277 }' 00:18:54.277 18:03:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.277 18:03:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.537 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:54.537 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.537 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:54.537 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:54.537 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.537 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.537 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.537 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.537 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.537 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.537 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.537 "name": "raid_bdev1", 00:18:54.537 "uuid": "cfdf48fe-786d-4115-892c-292a4ffb88f6", 00:18:54.537 
"strip_size_kb": 0, 00:18:54.537 "state": "online", 00:18:54.538 "raid_level": "raid1", 00:18:54.538 "superblock": true, 00:18:54.538 "num_base_bdevs": 2, 00:18:54.538 "num_base_bdevs_discovered": 1, 00:18:54.538 "num_base_bdevs_operational": 1, 00:18:54.538 "base_bdevs_list": [ 00:18:54.538 { 00:18:54.538 "name": null, 00:18:54.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.538 "is_configured": false, 00:18:54.538 "data_offset": 0, 00:18:54.538 "data_size": 7936 00:18:54.538 }, 00:18:54.538 { 00:18:54.538 "name": "BaseBdev2", 00:18:54.538 "uuid": "f1c918a1-c6cd-5b23-a2e7-9bb126bd93e3", 00:18:54.538 "is_configured": true, 00:18:54.538 "data_offset": 256, 00:18:54.538 "data_size": 7936 00:18:54.538 } 00:18:54.538 ] 00:18:54.538 }' 00:18:54.538 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.538 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:54.538 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.538 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:54.538 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88225 00:18:54.538 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88225 ']' 00:18:54.538 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88225 00:18:54.798 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:54.798 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.798 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88225 00:18:54.798 18:03:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:54.798 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:54.798 killing process with pid 88225 00:18:54.798 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88225' 00:18:54.798 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88225 00:18:54.798 Received shutdown signal, test time was about 60.000000 seconds 00:18:54.798 00:18:54.798 Latency(us) 00:18:54.798 [2024-11-26T18:03:36.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.798 [2024-11-26T18:03:36.661Z] =================================================================================================================== 00:18:54.798 [2024-11-26T18:03:36.661Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:54.798 [2024-11-26 18:03:36.430326] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:54.798 18:03:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88225 00:18:54.798 [2024-11-26 18:03:36.430504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:54.798 [2024-11-26 18:03:36.430566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:54.798 [2024-11-26 18:03:36.430581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:55.058 [2024-11-26 18:03:36.818613] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:56.439 18:03:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:56.439 00:18:56.439 real 0m20.683s 00:18:56.439 user 0m27.045s 00:18:56.439 sys 0m2.635s 00:18:56.439 18:03:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.439 18:03:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.439 ************************************ 00:18:56.439 END TEST raid_rebuild_test_sb_md_separate 00:18:56.439 ************************************ 00:18:56.439 18:03:38 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:56.439 18:03:38 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:56.439 18:03:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:56.439 18:03:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.439 18:03:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:56.439 ************************************ 00:18:56.439 START TEST raid_state_function_test_sb_md_interleaved 00:18:56.439 ************************************ 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:56.439 18:03:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:18:56.439 Process raid pid: 88920 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88920 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88920' 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88920 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88920 ']' 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.439 18:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.699 [2024-11-26 18:03:38.306121] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:18:56.699 [2024-11-26 18:03:38.306252] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.699 [2024-11-26 18:03:38.487541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.958 [2024-11-26 18:03:38.628972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.217 [2024-11-26 18:03:38.884686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.217 [2024-11-26 18:03:38.884760] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.484 [2024-11-26 18:03:39.247526] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:57.484 [2024-11-26 18:03:39.247590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:57.484 [2024-11-26 18:03:39.247603] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:57.484 [2024-11-26 18:03:39.247614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:57.484 18:03:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.484 18:03:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.484 "name": "Existed_Raid", 00:18:57.484 "uuid": "159b9c9e-9870-4c9f-8e7c-317e83c2a0e8", 00:18:57.484 "strip_size_kb": 0, 00:18:57.484 "state": "configuring", 00:18:57.484 "raid_level": "raid1", 00:18:57.484 "superblock": true, 00:18:57.484 "num_base_bdevs": 2, 00:18:57.484 "num_base_bdevs_discovered": 0, 00:18:57.484 "num_base_bdevs_operational": 2, 00:18:57.484 "base_bdevs_list": [ 00:18:57.484 { 00:18:57.484 "name": "BaseBdev1", 00:18:57.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.484 "is_configured": false, 00:18:57.484 "data_offset": 0, 00:18:57.484 "data_size": 0 00:18:57.484 }, 00:18:57.484 { 00:18:57.484 "name": "BaseBdev2", 00:18:57.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.484 "is_configured": false, 00:18:57.484 "data_offset": 0, 00:18:57.484 "data_size": 0 00:18:57.484 } 00:18:57.484 ] 00:18:57.484 }' 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.484 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.108 [2024-11-26 18:03:39.738641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:58.108 [2024-11-26 18:03:39.738710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.108 [2024-11-26 18:03:39.746648] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:58.108 [2024-11-26 18:03:39.746699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:58.108 [2024-11-26 18:03:39.746711] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:58.108 [2024-11-26 18:03:39.746725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.108 [2024-11-26 18:03:39.799542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:58.108 BaseBdev1 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.108 [ 00:18:58.108 { 00:18:58.108 "name": "BaseBdev1", 00:18:58.108 "aliases": [ 00:18:58.108 "13182132-7c7e-4d66-aee7-a30158ad7cf8" 00:18:58.108 ], 00:18:58.108 "product_name": "Malloc disk", 00:18:58.108 "block_size": 4128, 00:18:58.108 "num_blocks": 8192, 00:18:58.108 "uuid": "13182132-7c7e-4d66-aee7-a30158ad7cf8", 00:18:58.108 "md_size": 32, 00:18:58.108 
"md_interleave": true, 00:18:58.108 "dif_type": 0, 00:18:58.108 "assigned_rate_limits": { 00:18:58.108 "rw_ios_per_sec": 0, 00:18:58.108 "rw_mbytes_per_sec": 0, 00:18:58.108 "r_mbytes_per_sec": 0, 00:18:58.108 "w_mbytes_per_sec": 0 00:18:58.108 }, 00:18:58.108 "claimed": true, 00:18:58.108 "claim_type": "exclusive_write", 00:18:58.108 "zoned": false, 00:18:58.108 "supported_io_types": { 00:18:58.108 "read": true, 00:18:58.108 "write": true, 00:18:58.108 "unmap": true, 00:18:58.108 "flush": true, 00:18:58.108 "reset": true, 00:18:58.108 "nvme_admin": false, 00:18:58.108 "nvme_io": false, 00:18:58.108 "nvme_io_md": false, 00:18:58.108 "write_zeroes": true, 00:18:58.108 "zcopy": true, 00:18:58.108 "get_zone_info": false, 00:18:58.108 "zone_management": false, 00:18:58.108 "zone_append": false, 00:18:58.108 "compare": false, 00:18:58.108 "compare_and_write": false, 00:18:58.108 "abort": true, 00:18:58.108 "seek_hole": false, 00:18:58.108 "seek_data": false, 00:18:58.108 "copy": true, 00:18:58.108 "nvme_iov_md": false 00:18:58.108 }, 00:18:58.108 "memory_domains": [ 00:18:58.108 { 00:18:58.108 "dma_device_id": "system", 00:18:58.108 "dma_device_type": 1 00:18:58.108 }, 00:18:58.108 { 00:18:58.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.108 "dma_device_type": 2 00:18:58.108 } 00:18:58.108 ], 00:18:58.108 "driver_specific": {} 00:18:58.108 } 00:18:58.108 ] 00:18:58.108 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.109 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:58.109 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:58.109 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.109 18:03:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.109 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.109 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.109 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:58.109 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.109 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.109 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.109 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.109 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.109 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.109 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.109 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.109 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.109 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.109 "name": "Existed_Raid", 00:18:58.109 "uuid": "0bc08cf2-2618-4c0d-9bd7-e94cc12b37a1", 00:18:58.109 "strip_size_kb": 0, 00:18:58.109 "state": "configuring", 00:18:58.109 "raid_level": "raid1", 
00:18:58.109 "superblock": true, 00:18:58.109 "num_base_bdevs": 2, 00:18:58.109 "num_base_bdevs_discovered": 1, 00:18:58.109 "num_base_bdevs_operational": 2, 00:18:58.109 "base_bdevs_list": [ 00:18:58.109 { 00:18:58.109 "name": "BaseBdev1", 00:18:58.109 "uuid": "13182132-7c7e-4d66-aee7-a30158ad7cf8", 00:18:58.109 "is_configured": true, 00:18:58.109 "data_offset": 256, 00:18:58.109 "data_size": 7936 00:18:58.109 }, 00:18:58.109 { 00:18:58.109 "name": "BaseBdev2", 00:18:58.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.109 "is_configured": false, 00:18:58.109 "data_offset": 0, 00:18:58.109 "data_size": 0 00:18:58.109 } 00:18:58.109 ] 00:18:58.109 }' 00:18:58.109 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.109 18:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.678 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:58.678 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.678 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.678 [2024-11-26 18:03:40.242922] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:58.678 [2024-11-26 18:03:40.243006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:58.678 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.678 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:58.678 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:58.678 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.678 [2024-11-26 18:03:40.254987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:58.678 [2024-11-26 18:03:40.257198] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:58.678 [2024-11-26 18:03:40.257251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:58.678 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.678 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:58.679 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:58.679 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:58.679 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.679 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.679 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.679 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.679 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:58.679 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.679 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.679 
18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.679 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.679 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.679 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.679 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.679 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.679 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.679 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.679 "name": "Existed_Raid", 00:18:58.679 "uuid": "4d59cb49-2907-4ea6-a05a-3c51031c90de", 00:18:58.679 "strip_size_kb": 0, 00:18:58.679 "state": "configuring", 00:18:58.679 "raid_level": "raid1", 00:18:58.679 "superblock": true, 00:18:58.679 "num_base_bdevs": 2, 00:18:58.679 "num_base_bdevs_discovered": 1, 00:18:58.679 "num_base_bdevs_operational": 2, 00:18:58.679 "base_bdevs_list": [ 00:18:58.679 { 00:18:58.679 "name": "BaseBdev1", 00:18:58.679 "uuid": "13182132-7c7e-4d66-aee7-a30158ad7cf8", 00:18:58.679 "is_configured": true, 00:18:58.679 "data_offset": 256, 00:18:58.679 "data_size": 7936 00:18:58.679 }, 00:18:58.679 { 00:18:58.679 "name": "BaseBdev2", 00:18:58.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.679 "is_configured": false, 00:18:58.679 "data_offset": 0, 00:18:58.679 "data_size": 0 00:18:58.679 } 00:18:58.679 ] 00:18:58.679 }' 00:18:58.679 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:58.679 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.938 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:58.938 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.938 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.938 [2024-11-26 18:03:40.782003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:58.938 [2024-11-26 18:03:40.782296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:58.938 [2024-11-26 18:03:40.782316] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:58.938 [2024-11-26 18:03:40.782422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:58.938 [2024-11-26 18:03:40.782509] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:58.938 [2024-11-26 18:03:40.782522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:58.938 [2024-11-26 18:03:40.782611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.938 BaseBdev2 00:18:58.938 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.938 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:58.938 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:58.938 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:18:58.938 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:58.938 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:58.938 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:58.938 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:58.938 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.938 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.938 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.938 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:58.938 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.938 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.196 [ 00:18:59.196 { 00:18:59.196 "name": "BaseBdev2", 00:18:59.197 "aliases": [ 00:18:59.197 "74a253db-b745-4d66-b427-f077d5da7bf0" 00:18:59.197 ], 00:18:59.197 "product_name": "Malloc disk", 00:18:59.197 "block_size": 4128, 00:18:59.197 "num_blocks": 8192, 00:18:59.197 "uuid": "74a253db-b745-4d66-b427-f077d5da7bf0", 00:18:59.197 "md_size": 32, 00:18:59.197 "md_interleave": true, 00:18:59.197 "dif_type": 0, 00:18:59.197 "assigned_rate_limits": { 00:18:59.197 "rw_ios_per_sec": 0, 00:18:59.197 "rw_mbytes_per_sec": 0, 00:18:59.197 "r_mbytes_per_sec": 0, 00:18:59.197 "w_mbytes_per_sec": 0 00:18:59.197 }, 00:18:59.197 "claimed": true, 00:18:59.197 "claim_type": "exclusive_write", 
00:18:59.197 "zoned": false, 00:18:59.197 "supported_io_types": { 00:18:59.197 "read": true, 00:18:59.197 "write": true, 00:18:59.197 "unmap": true, 00:18:59.197 "flush": true, 00:18:59.197 "reset": true, 00:18:59.197 "nvme_admin": false, 00:18:59.197 "nvme_io": false, 00:18:59.197 "nvme_io_md": false, 00:18:59.197 "write_zeroes": true, 00:18:59.197 "zcopy": true, 00:18:59.197 "get_zone_info": false, 00:18:59.197 "zone_management": false, 00:18:59.197 "zone_append": false, 00:18:59.197 "compare": false, 00:18:59.197 "compare_and_write": false, 00:18:59.197 "abort": true, 00:18:59.197 "seek_hole": false, 00:18:59.197 "seek_data": false, 00:18:59.197 "copy": true, 00:18:59.197 "nvme_iov_md": false 00:18:59.197 }, 00:18:59.197 "memory_domains": [ 00:18:59.197 { 00:18:59.197 "dma_device_id": "system", 00:18:59.197 "dma_device_type": 1 00:18:59.197 }, 00:18:59.197 { 00:18:59.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.197 "dma_device_type": 2 00:18:59.197 } 00:18:59.197 ], 00:18:59.197 "driver_specific": {} 00:18:59.197 } 00:18:59.197 ] 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.197 
18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.197 "name": "Existed_Raid", 00:18:59.197 "uuid": "4d59cb49-2907-4ea6-a05a-3c51031c90de", 00:18:59.197 "strip_size_kb": 0, 00:18:59.197 "state": "online", 00:18:59.197 "raid_level": "raid1", 00:18:59.197 "superblock": true, 00:18:59.197 "num_base_bdevs": 2, 00:18:59.197 "num_base_bdevs_discovered": 2, 00:18:59.197 
"num_base_bdevs_operational": 2, 00:18:59.197 "base_bdevs_list": [ 00:18:59.197 { 00:18:59.197 "name": "BaseBdev1", 00:18:59.197 "uuid": "13182132-7c7e-4d66-aee7-a30158ad7cf8", 00:18:59.197 "is_configured": true, 00:18:59.197 "data_offset": 256, 00:18:59.197 "data_size": 7936 00:18:59.197 }, 00:18:59.197 { 00:18:59.197 "name": "BaseBdev2", 00:18:59.197 "uuid": "74a253db-b745-4d66-b427-f077d5da7bf0", 00:18:59.197 "is_configured": true, 00:18:59.197 "data_offset": 256, 00:18:59.197 "data_size": 7936 00:18:59.197 } 00:18:59.197 ] 00:18:59.197 }' 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.197 18:03:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.456 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:59.456 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:59.456 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:59.456 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:59.456 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:59.456 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:59.456 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:59.456 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.456 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.456 18:03:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:59.456 [2024-11-26 18:03:41.257726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.456 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.456 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:59.456 "name": "Existed_Raid", 00:18:59.456 "aliases": [ 00:18:59.456 "4d59cb49-2907-4ea6-a05a-3c51031c90de" 00:18:59.456 ], 00:18:59.456 "product_name": "Raid Volume", 00:18:59.456 "block_size": 4128, 00:18:59.456 "num_blocks": 7936, 00:18:59.456 "uuid": "4d59cb49-2907-4ea6-a05a-3c51031c90de", 00:18:59.456 "md_size": 32, 00:18:59.456 "md_interleave": true, 00:18:59.456 "dif_type": 0, 00:18:59.456 "assigned_rate_limits": { 00:18:59.456 "rw_ios_per_sec": 0, 00:18:59.456 "rw_mbytes_per_sec": 0, 00:18:59.456 "r_mbytes_per_sec": 0, 00:18:59.456 "w_mbytes_per_sec": 0 00:18:59.456 }, 00:18:59.456 "claimed": false, 00:18:59.456 "zoned": false, 00:18:59.456 "supported_io_types": { 00:18:59.456 "read": true, 00:18:59.456 "write": true, 00:18:59.456 "unmap": false, 00:18:59.456 "flush": false, 00:18:59.456 "reset": true, 00:18:59.456 "nvme_admin": false, 00:18:59.456 "nvme_io": false, 00:18:59.456 "nvme_io_md": false, 00:18:59.456 "write_zeroes": true, 00:18:59.456 "zcopy": false, 00:18:59.456 "get_zone_info": false, 00:18:59.456 "zone_management": false, 00:18:59.456 "zone_append": false, 00:18:59.456 "compare": false, 00:18:59.456 "compare_and_write": false, 00:18:59.456 "abort": false, 00:18:59.456 "seek_hole": false, 00:18:59.456 "seek_data": false, 00:18:59.456 "copy": false, 00:18:59.456 "nvme_iov_md": false 00:18:59.456 }, 00:18:59.456 "memory_domains": [ 00:18:59.456 { 00:18:59.456 "dma_device_id": "system", 00:18:59.456 "dma_device_type": 1 00:18:59.456 }, 00:18:59.456 { 00:18:59.456 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:59.456 "dma_device_type": 2 00:18:59.456 }, 00:18:59.456 { 00:18:59.456 "dma_device_id": "system", 00:18:59.456 "dma_device_type": 1 00:18:59.456 }, 00:18:59.456 { 00:18:59.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.456 "dma_device_type": 2 00:18:59.456 } 00:18:59.456 ], 00:18:59.456 "driver_specific": { 00:18:59.456 "raid": { 00:18:59.456 "uuid": "4d59cb49-2907-4ea6-a05a-3c51031c90de", 00:18:59.456 "strip_size_kb": 0, 00:18:59.456 "state": "online", 00:18:59.456 "raid_level": "raid1", 00:18:59.456 "superblock": true, 00:18:59.456 "num_base_bdevs": 2, 00:18:59.456 "num_base_bdevs_discovered": 2, 00:18:59.456 "num_base_bdevs_operational": 2, 00:18:59.456 "base_bdevs_list": [ 00:18:59.456 { 00:18:59.456 "name": "BaseBdev1", 00:18:59.456 "uuid": "13182132-7c7e-4d66-aee7-a30158ad7cf8", 00:18:59.456 "is_configured": true, 00:18:59.456 "data_offset": 256, 00:18:59.456 "data_size": 7936 00:18:59.456 }, 00:18:59.456 { 00:18:59.456 "name": "BaseBdev2", 00:18:59.456 "uuid": "74a253db-b745-4d66-b427-f077d5da7bf0", 00:18:59.456 "is_configured": true, 00:18:59.456 "data_offset": 256, 00:18:59.456 "data_size": 7936 00:18:59.456 } 00:18:59.456 ] 00:18:59.456 } 00:18:59.456 } 00:18:59.456 }' 00:18:59.456 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:59.716 BaseBdev2' 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:59.716 
18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.716 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.716 [2024-11-26 18:03:41.509007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.976 18:03:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.976 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.976 "name": "Existed_Raid", 00:18:59.976 "uuid": "4d59cb49-2907-4ea6-a05a-3c51031c90de", 00:18:59.976 "strip_size_kb": 0, 00:18:59.976 "state": "online", 00:18:59.976 "raid_level": "raid1", 00:18:59.976 "superblock": true, 00:18:59.976 "num_base_bdevs": 2, 00:18:59.976 "num_base_bdevs_discovered": 1, 00:18:59.976 "num_base_bdevs_operational": 1, 00:18:59.976 "base_bdevs_list": [ 00:18:59.976 { 00:18:59.976 "name": null, 00:18:59.976 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:59.976 "is_configured": false, 00:18:59.976 "data_offset": 0, 00:18:59.977 "data_size": 7936 00:18:59.977 }, 00:18:59.977 { 00:18:59.977 "name": "BaseBdev2", 00:18:59.977 "uuid": "74a253db-b745-4d66-b427-f077d5da7bf0", 00:18:59.977 "is_configured": true, 00:18:59.977 "data_offset": 256, 00:18:59.977 "data_size": 7936 00:18:59.977 } 00:18:59.977 ] 00:18:59.977 }' 00:18:59.977 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.977 18:03:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.235 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:00.235 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:00.235 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.235 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.235 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.235 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:00.235 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:00.495 18:03:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.495 [2024-11-26 18:03:42.106191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:00.495 [2024-11-26 18:03:42.106404] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:00.495 [2024-11-26 18:03:42.213468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.495 [2024-11-26 18:03:42.213662] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:00.495 [2024-11-26 18:03:42.213722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88920 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88920 ']' 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88920 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88920 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88920' 00:19:00.495 killing process with pid 88920 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88920 00:19:00.495 [2024-11-26 18:03:42.298020] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:00.495 18:03:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88920 00:19:00.495 [2024-11-26 18:03:42.316920] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:01.914 
18:03:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:19:01.914 00:19:01.914 real 0m5.442s 00:19:01.914 user 0m7.805s 00:19:01.914 sys 0m0.820s 00:19:01.914 ************************************ 00:19:01.914 END TEST raid_state_function_test_sb_md_interleaved 00:19:01.914 ************************************ 00:19:01.914 18:03:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.914 18:03:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.914 18:03:43 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:19:01.914 18:03:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:01.914 18:03:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.914 18:03:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:01.914 ************************************ 00:19:01.915 START TEST raid_superblock_test_md_interleaved 00:19:01.915 ************************************ 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89169 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89169 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89169 ']' 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.915 18:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.173 [2024-11-26 18:03:43.806246] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:19:02.173 [2024-11-26 18:03:43.806508] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89169 ] 00:19:02.173 [2024-11-26 18:03:43.981361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.432 [2024-11-26 18:03:44.112251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.691 [2024-11-26 18:03:44.323229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.691 [2024-11-26 18:03:44.323396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # 
local bdev_malloc=malloc1 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.951 malloc1 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.951 [2024-11-26 18:03:44.732201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:02.951 [2024-11-26 18:03:44.732346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.951 [2024-11-26 18:03:44.732413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:19:02.951 [2024-11-26 18:03:44.732457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.951 [2024-11-26 18:03:44.734674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.951 [2024-11-26 18:03:44.734768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:02.951 pt1 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.951 malloc2 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.951 [2024-11-26 18:03:44.793947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:02.951 [2024-11-26 18:03:44.794034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.951 [2024-11-26 18:03:44.794063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:02.951 [2024-11-26 18:03:44.794073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.951 [2024-11-26 18:03:44.796161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.951 [2024-11-26 18:03:44.796209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:02.951 pt2 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.951 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.951 [2024-11-26 
18:03:44.805991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:02.951 [2024-11-26 18:03:44.807993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:02.951 [2024-11-26 18:03:44.808328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:02.951 [2024-11-26 18:03:44.808352] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:02.951 [2024-11-26 18:03:44.808460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:02.951 [2024-11-26 18:03:44.808553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:02.951 [2024-11-26 18:03:44.808569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:02.951 [2024-11-26 18:03:44.808663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.210 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.210 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:03.210 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.210 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.210 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.210 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.210 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:03.210 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:19:03.210 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.210 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.210 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.210 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.210 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.210 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.210 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.210 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.210 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.210 "name": "raid_bdev1", 00:19:03.210 "uuid": "3aa0d8ab-6307-4362-9bfa-deb2c9ea400b", 00:19:03.210 "strip_size_kb": 0, 00:19:03.210 "state": "online", 00:19:03.210 "raid_level": "raid1", 00:19:03.210 "superblock": true, 00:19:03.210 "num_base_bdevs": 2, 00:19:03.210 "num_base_bdevs_discovered": 2, 00:19:03.210 "num_base_bdevs_operational": 2, 00:19:03.210 "base_bdevs_list": [ 00:19:03.210 { 00:19:03.210 "name": "pt1", 00:19:03.210 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:03.210 "is_configured": true, 00:19:03.210 "data_offset": 256, 00:19:03.210 "data_size": 7936 00:19:03.210 }, 00:19:03.210 { 00:19:03.210 "name": "pt2", 00:19:03.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.210 "is_configured": true, 00:19:03.210 "data_offset": 256, 00:19:03.210 "data_size": 7936 00:19:03.210 } 00:19:03.210 ] 00:19:03.210 }' 00:19:03.210 18:03:44 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.210 18:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.469 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:03.469 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:03.469 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:03.469 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:03.469 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:03.469 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:03.469 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:03.469 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:03.469 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.469 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.469 [2024-11-26 18:03:45.281503] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.469 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.469 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:03.469 "name": "raid_bdev1", 00:19:03.469 "aliases": [ 00:19:03.469 "3aa0d8ab-6307-4362-9bfa-deb2c9ea400b" 00:19:03.469 ], 00:19:03.469 "product_name": "Raid Volume", 00:19:03.469 "block_size": 4128, 00:19:03.469 
"num_blocks": 7936, 00:19:03.469 "uuid": "3aa0d8ab-6307-4362-9bfa-deb2c9ea400b", 00:19:03.469 "md_size": 32, 00:19:03.469 "md_interleave": true, 00:19:03.469 "dif_type": 0, 00:19:03.469 "assigned_rate_limits": { 00:19:03.469 "rw_ios_per_sec": 0, 00:19:03.469 "rw_mbytes_per_sec": 0, 00:19:03.469 "r_mbytes_per_sec": 0, 00:19:03.469 "w_mbytes_per_sec": 0 00:19:03.469 }, 00:19:03.469 "claimed": false, 00:19:03.469 "zoned": false, 00:19:03.469 "supported_io_types": { 00:19:03.469 "read": true, 00:19:03.469 "write": true, 00:19:03.469 "unmap": false, 00:19:03.469 "flush": false, 00:19:03.469 "reset": true, 00:19:03.469 "nvme_admin": false, 00:19:03.469 "nvme_io": false, 00:19:03.469 "nvme_io_md": false, 00:19:03.469 "write_zeroes": true, 00:19:03.469 "zcopy": false, 00:19:03.469 "get_zone_info": false, 00:19:03.469 "zone_management": false, 00:19:03.469 "zone_append": false, 00:19:03.469 "compare": false, 00:19:03.469 "compare_and_write": false, 00:19:03.469 "abort": false, 00:19:03.469 "seek_hole": false, 00:19:03.469 "seek_data": false, 00:19:03.469 "copy": false, 00:19:03.469 "nvme_iov_md": false 00:19:03.469 }, 00:19:03.469 "memory_domains": [ 00:19:03.469 { 00:19:03.469 "dma_device_id": "system", 00:19:03.469 "dma_device_type": 1 00:19:03.469 }, 00:19:03.469 { 00:19:03.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.469 "dma_device_type": 2 00:19:03.469 }, 00:19:03.469 { 00:19:03.469 "dma_device_id": "system", 00:19:03.469 "dma_device_type": 1 00:19:03.469 }, 00:19:03.469 { 00:19:03.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.469 "dma_device_type": 2 00:19:03.469 } 00:19:03.469 ], 00:19:03.469 "driver_specific": { 00:19:03.469 "raid": { 00:19:03.469 "uuid": "3aa0d8ab-6307-4362-9bfa-deb2c9ea400b", 00:19:03.469 "strip_size_kb": 0, 00:19:03.469 "state": "online", 00:19:03.469 "raid_level": "raid1", 00:19:03.469 "superblock": true, 00:19:03.469 "num_base_bdevs": 2, 00:19:03.469 "num_base_bdevs_discovered": 2, 00:19:03.469 "num_base_bdevs_operational": 
2, 00:19:03.469 "base_bdevs_list": [ 00:19:03.469 { 00:19:03.469 "name": "pt1", 00:19:03.469 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:03.469 "is_configured": true, 00:19:03.469 "data_offset": 256, 00:19:03.469 "data_size": 7936 00:19:03.469 }, 00:19:03.469 { 00:19:03.469 "name": "pt2", 00:19:03.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.469 "is_configured": true, 00:19:03.469 "data_offset": 256, 00:19:03.469 "data_size": 7936 00:19:03.469 } 00:19:03.469 ] 00:19:03.469 } 00:19:03.469 } 00:19:03.469 }' 00:19:03.469 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:03.727 pt2' 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.727 18:03:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.727 [2024-11-26 18:03:45.497143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3aa0d8ab-6307-4362-9bfa-deb2c9ea400b 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 3aa0d8ab-6307-4362-9bfa-deb2c9ea400b ']' 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.727 [2024-11-26 18:03:45.540727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:03.727 [2024-11-26 18:03:45.540807] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.727 [2024-11-26 18:03:45.540943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.727 [2024-11-26 18:03:45.541049] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.727 [2024-11-26 18:03:45.541111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.727 18:03:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.727 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.987 18:03:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.987 [2024-11-26 18:03:45.656566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:03.987 [2024-11-26 18:03:45.658980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:19:03.987 [2024-11-26 18:03:45.659172] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:03.987 [2024-11-26 18:03:45.659302] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:03.987 [2024-11-26 18:03:45.659366] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:03.987 [2024-11-26 18:03:45.659410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:03.987 request: 00:19:03.987 { 00:19:03.987 "name": "raid_bdev1", 00:19:03.987 "raid_level": "raid1", 00:19:03.987 "base_bdevs": [ 00:19:03.987 "malloc1", 00:19:03.987 "malloc2" 00:19:03.987 ], 00:19:03.987 "superblock": false, 00:19:03.987 "method": "bdev_raid_create", 00:19:03.987 "req_id": 1 00:19:03.987 } 00:19:03.987 Got JSON-RPC error response 00:19:03.987 response: 00:19:03.987 { 00:19:03.987 "code": -17, 00:19:03.987 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:03.987 } 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.987 18:03:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.987 [2024-11-26 18:03:45.712462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:03.987 [2024-11-26 18:03:45.712625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.987 [2024-11-26 18:03:45.712652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:03.987 [2024-11-26 18:03:45.712665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.987 [2024-11-26 18:03:45.714943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.987 [2024-11-26 18:03:45.714995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:03.987 [2024-11-26 18:03:45.715088] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:03.987 [2024-11-26 18:03:45.715169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:03.987 pt1 00:19:03.987 18:03:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.987 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.988 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:03.988 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.988 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.988 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.988 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.988 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.988 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.988 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.988 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.988 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.988 
18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.988 "name": "raid_bdev1", 00:19:03.988 "uuid": "3aa0d8ab-6307-4362-9bfa-deb2c9ea400b", 00:19:03.988 "strip_size_kb": 0, 00:19:03.988 "state": "configuring", 00:19:03.988 "raid_level": "raid1", 00:19:03.988 "superblock": true, 00:19:03.988 "num_base_bdevs": 2, 00:19:03.988 "num_base_bdevs_discovered": 1, 00:19:03.988 "num_base_bdevs_operational": 2, 00:19:03.988 "base_bdevs_list": [ 00:19:03.988 { 00:19:03.988 "name": "pt1", 00:19:03.988 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:03.988 "is_configured": true, 00:19:03.988 "data_offset": 256, 00:19:03.988 "data_size": 7936 00:19:03.988 }, 00:19:03.988 { 00:19:03.988 "name": null, 00:19:03.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.988 "is_configured": false, 00:19:03.988 "data_offset": 256, 00:19:03.988 "data_size": 7936 00:19:03.988 } 00:19:03.988 ] 00:19:03.988 }' 00:19:03.988 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.988 18:03:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.556 [2024-11-26 18:03:46.179646] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:04.556 [2024-11-26 18:03:46.179786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.556 [2024-11-26 18:03:46.179862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:04.556 [2024-11-26 18:03:46.179902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.556 [2024-11-26 18:03:46.180164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.556 [2024-11-26 18:03:46.180229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:04.556 [2024-11-26 18:03:46.180321] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:04.556 [2024-11-26 18:03:46.180379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:04.556 [2024-11-26 18:03:46.180514] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:04.556 [2024-11-26 18:03:46.180560] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:04.556 [2024-11-26 18:03:46.180671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:04.556 [2024-11-26 18:03:46.180790] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:04.556 [2024-11-26 18:03:46.180829] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:04.556 [2024-11-26 18:03:46.180945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.556 pt2 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:04.556 18:03:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.556 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.556 18:03:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.556 "name": "raid_bdev1", 00:19:04.556 "uuid": "3aa0d8ab-6307-4362-9bfa-deb2c9ea400b", 00:19:04.556 "strip_size_kb": 0, 00:19:04.556 "state": "online", 00:19:04.556 "raid_level": "raid1", 00:19:04.556 "superblock": true, 00:19:04.556 "num_base_bdevs": 2, 00:19:04.556 "num_base_bdevs_discovered": 2, 00:19:04.556 "num_base_bdevs_operational": 2, 00:19:04.556 "base_bdevs_list": [ 00:19:04.556 { 00:19:04.556 "name": "pt1", 00:19:04.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:04.556 "is_configured": true, 00:19:04.556 "data_offset": 256, 00:19:04.556 "data_size": 7936 00:19:04.556 }, 00:19:04.556 { 00:19:04.556 "name": "pt2", 00:19:04.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.557 "is_configured": true, 00:19:04.557 "data_offset": 256, 00:19:04.557 "data_size": 7936 00:19:04.557 } 00:19:04.557 ] 00:19:04.557 }' 00:19:04.557 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.557 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.816 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:04.816 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:04.816 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:04.816 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:04.816 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:04.816 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:04.816 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:04.816 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:04.816 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.816 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.816 [2024-11-26 18:03:46.671111] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:05.076 "name": "raid_bdev1", 00:19:05.076 "aliases": [ 00:19:05.076 "3aa0d8ab-6307-4362-9bfa-deb2c9ea400b" 00:19:05.076 ], 00:19:05.076 "product_name": "Raid Volume", 00:19:05.076 "block_size": 4128, 00:19:05.076 "num_blocks": 7936, 00:19:05.076 "uuid": "3aa0d8ab-6307-4362-9bfa-deb2c9ea400b", 00:19:05.076 "md_size": 32, 00:19:05.076 "md_interleave": true, 00:19:05.076 "dif_type": 0, 00:19:05.076 "assigned_rate_limits": { 00:19:05.076 "rw_ios_per_sec": 0, 00:19:05.076 "rw_mbytes_per_sec": 0, 00:19:05.076 "r_mbytes_per_sec": 0, 00:19:05.076 "w_mbytes_per_sec": 0 00:19:05.076 }, 00:19:05.076 "claimed": false, 00:19:05.076 "zoned": false, 00:19:05.076 "supported_io_types": { 00:19:05.076 "read": true, 00:19:05.076 "write": true, 00:19:05.076 "unmap": false, 00:19:05.076 "flush": false, 00:19:05.076 "reset": true, 00:19:05.076 "nvme_admin": false, 00:19:05.076 "nvme_io": false, 00:19:05.076 "nvme_io_md": false, 00:19:05.076 "write_zeroes": true, 00:19:05.076 "zcopy": false, 00:19:05.076 "get_zone_info": false, 00:19:05.076 "zone_management": false, 00:19:05.076 "zone_append": false, 00:19:05.076 "compare": false, 00:19:05.076 "compare_and_write": false, 00:19:05.076 "abort": false, 00:19:05.076 "seek_hole": false, 
00:19:05.076 "seek_data": false, 00:19:05.076 "copy": false, 00:19:05.076 "nvme_iov_md": false 00:19:05.076 }, 00:19:05.076 "memory_domains": [ 00:19:05.076 { 00:19:05.076 "dma_device_id": "system", 00:19:05.076 "dma_device_type": 1 00:19:05.076 }, 00:19:05.076 { 00:19:05.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.076 "dma_device_type": 2 00:19:05.076 }, 00:19:05.076 { 00:19:05.076 "dma_device_id": "system", 00:19:05.076 "dma_device_type": 1 00:19:05.076 }, 00:19:05.076 { 00:19:05.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.076 "dma_device_type": 2 00:19:05.076 } 00:19:05.076 ], 00:19:05.076 "driver_specific": { 00:19:05.076 "raid": { 00:19:05.076 "uuid": "3aa0d8ab-6307-4362-9bfa-deb2c9ea400b", 00:19:05.076 "strip_size_kb": 0, 00:19:05.076 "state": "online", 00:19:05.076 "raid_level": "raid1", 00:19:05.076 "superblock": true, 00:19:05.076 "num_base_bdevs": 2, 00:19:05.076 "num_base_bdevs_discovered": 2, 00:19:05.076 "num_base_bdevs_operational": 2, 00:19:05.076 "base_bdevs_list": [ 00:19:05.076 { 00:19:05.076 "name": "pt1", 00:19:05.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:05.076 "is_configured": true, 00:19:05.076 "data_offset": 256, 00:19:05.076 "data_size": 7936 00:19:05.076 }, 00:19:05.076 { 00:19:05.076 "name": "pt2", 00:19:05.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.076 "is_configured": true, 00:19:05.076 "data_offset": 256, 00:19:05.076 "data_size": 7936 00:19:05.076 } 00:19:05.076 ] 00:19:05.076 } 00:19:05.076 } 00:19:05.076 }' 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:05.076 pt2' 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.076 
18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:05.076 [2024-11-26 18:03:46.902724] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.076 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 3aa0d8ab-6307-4362-9bfa-deb2c9ea400b '!=' 3aa0d8ab-6307-4362-9bfa-deb2c9ea400b ']' 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.336 [2024-11-26 18:03:46.950374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:05.336 18:03:46 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.336 18:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.336 18:03:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.336 "name": "raid_bdev1", 00:19:05.336 "uuid": "3aa0d8ab-6307-4362-9bfa-deb2c9ea400b", 00:19:05.336 "strip_size_kb": 0, 00:19:05.336 "state": "online", 00:19:05.336 "raid_level": "raid1", 00:19:05.336 "superblock": true, 00:19:05.336 "num_base_bdevs": 2, 00:19:05.336 "num_base_bdevs_discovered": 1, 00:19:05.336 "num_base_bdevs_operational": 1, 00:19:05.336 "base_bdevs_list": [ 00:19:05.336 { 00:19:05.336 "name": null, 00:19:05.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.336 "is_configured": false, 00:19:05.336 "data_offset": 0, 00:19:05.336 "data_size": 7936 00:19:05.336 }, 00:19:05.336 { 00:19:05.336 "name": "pt2", 00:19:05.336 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.336 "is_configured": true, 00:19:05.336 "data_offset": 256, 00:19:05.336 "data_size": 7936 00:19:05.336 } 00:19:05.336 ] 00:19:05.336 }' 00:19:05.336 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.336 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.595 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:05.595 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.595 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.595 [2024-11-26 18:03:47.417627] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:05.595 [2024-11-26 18:03:47.417713] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:05.595 [2024-11-26 18:03:47.417820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.595 [2024-11-26 18:03:47.417898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:05.595 [2024-11-26 18:03:47.417946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:05.595 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.595 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.595 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:05.595 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.595 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.595 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:05.854 18:03:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.854 [2024-11-26 18:03:47.489481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:05.854 [2024-11-26 18:03:47.489599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.854 [2024-11-26 18:03:47.489622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:05.854 [2024-11-26 18:03:47.489633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.854 [2024-11-26 18:03:47.491643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.854 [2024-11-26 18:03:47.491726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:05.854 [2024-11-26 18:03:47.491807] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:05.854 [2024-11-26 18:03:47.491898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:05.854 [2024-11-26 18:03:47.491997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:05.854 [2024-11-26 18:03:47.492042] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:05.854 [2024-11-26 18:03:47.492175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:05.854 [2024-11-26 18:03:47.492295] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:05.854 [2024-11-26 18:03:47.492335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:05.854 [2024-11-26 18:03:47.492429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.854 pt2 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 
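The xtrace above steps through `verify_raid_bdev_state`: it declares the expected values as locals, captures `rpc_cmd bdev_raid_get_bdevs all`, filters the array with `jq -r '.[] | select(.name == "raid_bdev1")'`, and compares fields of the resulting JSON against the expected state. A minimal standalone sketch of that comparison, using `sed` in place of `jq` and a hard-coded blob standing in for the RPC response (both are illustrative assumptions, not the harness's real plumbing):

```shell
# Stand-in for `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(...)'`;
# the field values are copied from the raid_bdev_info dump in this log.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs_discovered": 1
}'

# Extract one string-valued field, the way the real script uses jq;
# approximated with sed here so the sketch has no extra dependencies.
get_field() {
    printf '%s\n' "$raid_bdev_info" | sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p"
}

[ "$(get_field state)" = "online" ] && echo "state OK"
[ "$(get_field raid_level)" = "raid1" ] && echo "raid_level OK"
```

The real `verify_raid_bdev_state` also checks numeric fields such as `num_base_bdevs_discovered`; the pattern is the same, only the extraction differs.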
00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.854 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.854 "name": "raid_bdev1", 00:19:05.854 "uuid": "3aa0d8ab-6307-4362-9bfa-deb2c9ea400b", 00:19:05.854 "strip_size_kb": 0, 00:19:05.854 "state": "online", 00:19:05.854 "raid_level": "raid1", 00:19:05.854 "superblock": true, 00:19:05.854 "num_base_bdevs": 2, 00:19:05.854 "num_base_bdevs_discovered": 1, 00:19:05.854 "num_base_bdevs_operational": 1, 00:19:05.854 "base_bdevs_list": [ 00:19:05.854 { 00:19:05.854 "name": null, 00:19:05.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.854 "is_configured": false, 00:19:05.854 "data_offset": 256, 00:19:05.854 "data_size": 7936 00:19:05.854 }, 00:19:05.854 { 00:19:05.854 "name": "pt2", 00:19:05.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.854 "is_configured": true, 00:19:05.854 "data_offset": 256, 00:19:05.854 "data_size": 7936 00:19:05.854 } 00:19:05.854 ] 00:19:05.854 }' 00:19:05.855 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.855 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.113 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:06.113 18:03:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.113 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.113 [2024-11-26 18:03:47.924743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:06.113 [2024-11-26 18:03:47.924834] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:06.113 [2024-11-26 18:03:47.924956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.113 [2024-11-26 18:03:47.925082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:06.113 [2024-11-26 18:03:47.925162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:06.113 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.113 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.113 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:06.113 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.113 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.113 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.372 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:06.372 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:06.372 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:06.372 18:03:47 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:06.372 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.372 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.372 [2024-11-26 18:03:47.984694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:06.372 [2024-11-26 18:03:47.984811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.372 [2024-11-26 18:03:47.984852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:06.372 [2024-11-26 18:03:47.984884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.372 [2024-11-26 18:03:47.987226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.372 [2024-11-26 18:03:47.987297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:06.372 [2024-11-26 18:03:47.987404] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:06.372 [2024-11-26 18:03:47.987509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:06.372 [2024-11-26 18:03:47.987655] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:06.372 [2024-11-26 18:03:47.987718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:06.372 [2024-11-26 18:03:47.987772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:06.372 [2024-11-26 18:03:47.987902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:06.372 [2024-11-26 18:03:47.988049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:19:06.372 [2024-11-26 18:03:47.988093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:06.372 [2024-11-26 18:03:47.988199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:06.372 [2024-11-26 18:03:47.988308] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:06.372 [2024-11-26 18:03:47.988347] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:06.372 [2024-11-26 18:03:47.988465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.372 pt1 00:19:06.372 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.372 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:06.372 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:06.372 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.372 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.372 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.372 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.372 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:06.372 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.372 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.372 18:03:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.372 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.372 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.373 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.373 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.373 18:03:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.373 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.373 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.373 "name": "raid_bdev1", 00:19:06.373 "uuid": "3aa0d8ab-6307-4362-9bfa-deb2c9ea400b", 00:19:06.373 "strip_size_kb": 0, 00:19:06.373 "state": "online", 00:19:06.373 "raid_level": "raid1", 00:19:06.373 "superblock": true, 00:19:06.373 "num_base_bdevs": 2, 00:19:06.373 "num_base_bdevs_discovered": 1, 00:19:06.373 "num_base_bdevs_operational": 1, 00:19:06.373 "base_bdevs_list": [ 00:19:06.373 { 00:19:06.373 "name": null, 00:19:06.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.373 "is_configured": false, 00:19:06.373 "data_offset": 256, 00:19:06.373 "data_size": 7936 00:19:06.373 }, 00:19:06.373 { 00:19:06.373 "name": "pt2", 00:19:06.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.373 "is_configured": true, 00:19:06.373 "data_offset": 256, 00:19:06.373 "data_size": 7936 00:19:06.373 } 00:19:06.373 ] 00:19:06.373 }' 00:19:06.373 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.373 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:06.631 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:06.631 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.631 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.631 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:06.631 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.890 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:06.890 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:06.890 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.890 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.890 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:06.890 [2024-11-26 18:03:48.504086] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.890 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.890 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 3aa0d8ab-6307-4362-9bfa-deb2c9ea400b '!=' 3aa0d8ab-6307-4362-9bfa-deb2c9ea400b ']' 00:19:06.890 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89169 00:19:06.890 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89169 ']' 00:19:06.890 18:03:48 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89169 00:19:06.890 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:06.890 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.890 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89169 00:19:06.890 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:06.890 killing process with pid 89169 00:19:06.890 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:06.890 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89169' 00:19:06.890 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89169 00:19:06.890 [2024-11-26 18:03:48.578683] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:06.890 18:03:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89169 00:19:06.890 [2024-11-26 18:03:48.578802] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.890 [2024-11-26 18:03:48.578861] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:06.890 [2024-11-26 18:03:48.578877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:07.148 [2024-11-26 18:03:48.811521] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:08.522 18:03:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:19:08.522 00:19:08.522 real 0m6.340s 00:19:08.522 user 0m9.584s 00:19:08.522 sys 0m1.083s 00:19:08.522 
18:03:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:08.522 ************************************ 00:19:08.522 END TEST raid_superblock_test_md_interleaved 00:19:08.522 ************************************ 00:19:08.522 18:03:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.522 18:03:50 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:19:08.522 18:03:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:08.522 18:03:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.522 18:03:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:08.522 ************************************ 00:19:08.522 START TEST raid_rebuild_test_sb_md_interleaved 00:19:08.522 ************************************ 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=89502 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89502 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89502 ']' 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.522 18:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.522 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:08.522 Zero copy mechanism will not be used. 00:19:08.522 [2024-11-26 18:03:50.223387] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
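bdevperf is launched above with `-o 3M -q 2` and immediately prints that the 3145728-byte I/O size exceeds the 65536-byte zero-copy threshold, so zero copy is disabled for this run. The arithmetic behind that notice, with both values taken from the log:

```shell
# -o 3M: a 3 MiB I/O size; bdevperf's printed zero-copy threshold is 64 KiB.
io_size=$((3 * 1024 * 1024))   # 3145728 bytes
threshold=65536                # 64 KiB
if [ "$io_size" -gt "$threshold" ]; then
    echo "zero copy disabled: ${io_size} > ${threshold}"
fi
# prints: zero copy disabled: 3145728 > 65536
```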
00:19:08.522 [2024-11-26 18:03:50.223508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89502 ] 00:19:08.783 [2024-11-26 18:03:50.387568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.783 [2024-11-26 18:03:50.518853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.043 [2024-11-26 18:03:50.750443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.043 [2024-11-26 18:03:50.750514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.300 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.300 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:09.300 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:09.300 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:19:09.300 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.300 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.559 BaseBdev1_malloc 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.559 18:03:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.559 [2024-11-26 18:03:51.209846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:09.559 [2024-11-26 18:03:51.209915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.559 [2024-11-26 18:03:51.209941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:09.559 [2024-11-26 18:03:51.209954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.559 [2024-11-26 18:03:51.212110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.559 [2024-11-26 18:03:51.212216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:09.559 BaseBdev1 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.559 BaseBdev2_malloc 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:09.559 [2024-11-26 18:03:51.267807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:09.559 [2024-11-26 18:03:51.267904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.559 [2024-11-26 18:03:51.267930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:09.559 [2024-11-26 18:03:51.267946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.559 [2024-11-26 18:03:51.270224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.559 [2024-11-26 18:03:51.270271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:09.559 BaseBdev2 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.559 spare_malloc 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.559 spare_delay 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.559 [2024-11-26 18:03:51.349320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:09.559 [2024-11-26 18:03:51.349392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.559 [2024-11-26 18:03:51.349417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:09.559 [2024-11-26 18:03:51.349429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.559 [2024-11-26 18:03:51.351581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.559 [2024-11-26 18:03:51.351682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:09.559 spare 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.559 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.559 [2024-11-26 18:03:51.357377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:09.559 [2024-11-26 18:03:51.359478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:09.559 [2024-11-26 
18:03:51.359721] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:09.560 [2024-11-26 18:03:51.359742] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:09.560 [2024-11-26 18:03:51.359829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:09.560 [2024-11-26 18:03:51.359906] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:09.560 [2024-11-26 18:03:51.359916] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:09.560 [2024-11-26 18:03:51.359995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.560 "name": "raid_bdev1", 00:19:09.560 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:09.560 "strip_size_kb": 0, 00:19:09.560 "state": "online", 00:19:09.560 "raid_level": "raid1", 00:19:09.560 "superblock": true, 00:19:09.560 "num_base_bdevs": 2, 00:19:09.560 "num_base_bdevs_discovered": 2, 00:19:09.560 "num_base_bdevs_operational": 2, 00:19:09.560 "base_bdevs_list": [ 00:19:09.560 { 00:19:09.560 "name": "BaseBdev1", 00:19:09.560 "uuid": "8adec5eb-553b-55bc-9be1-500657a5bee5", 00:19:09.560 "is_configured": true, 00:19:09.560 "data_offset": 256, 00:19:09.560 "data_size": 7936 00:19:09.560 }, 00:19:09.560 { 00:19:09.560 "name": "BaseBdev2", 00:19:09.560 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:09.560 "is_configured": true, 00:19:09.560 "data_offset": 256, 00:19:09.560 "data_size": 7936 00:19:09.560 } 00:19:09.560 ] 00:19:09.560 }' 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.560 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.126 18:03:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:10.126 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.126 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.126 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:10.126 [2024-11-26 18:03:51.781008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:10.126 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.126 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:10.126 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:10.126 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.126 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:10.127 18:03:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.127 [2024-11-26 18:03:51.876504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.127 18:03:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.127 "name": "raid_bdev1", 00:19:10.127 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:10.127 "strip_size_kb": 0, 00:19:10.127 "state": "online", 00:19:10.127 "raid_level": "raid1", 00:19:10.127 "superblock": true, 00:19:10.127 "num_base_bdevs": 2, 00:19:10.127 "num_base_bdevs_discovered": 1, 00:19:10.127 "num_base_bdevs_operational": 1, 00:19:10.127 "base_bdevs_list": [ 00:19:10.127 { 00:19:10.127 "name": null, 00:19:10.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.127 "is_configured": false, 00:19:10.127 "data_offset": 0, 00:19:10.127 "data_size": 7936 00:19:10.127 }, 00:19:10.127 { 00:19:10.127 "name": "BaseBdev2", 00:19:10.127 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:10.127 "is_configured": true, 00:19:10.127 "data_offset": 256, 00:19:10.127 "data_size": 7936 00:19:10.127 } 00:19:10.127 ] 00:19:10.127 }' 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.127 18:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.522 18:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:10.522 18:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.522 18:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.522 [2024-11-26 18:03:52.307821] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:10.522 [2024-11-26 18:03:52.330207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:10.522 18:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.522 18:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:10.522 [2024-11-26 18:03:52.332481] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:11.895 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.895 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.895 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.895 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.895 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.895 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.895 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.895 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.895 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.895 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.895 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.895 "name": "raid_bdev1", 00:19:11.895 
"uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:11.895 "strip_size_kb": 0, 00:19:11.895 "state": "online", 00:19:11.895 "raid_level": "raid1", 00:19:11.895 "superblock": true, 00:19:11.895 "num_base_bdevs": 2, 00:19:11.895 "num_base_bdevs_discovered": 2, 00:19:11.895 "num_base_bdevs_operational": 2, 00:19:11.895 "process": { 00:19:11.895 "type": "rebuild", 00:19:11.895 "target": "spare", 00:19:11.895 "progress": { 00:19:11.895 "blocks": 2560, 00:19:11.895 "percent": 32 00:19:11.895 } 00:19:11.895 }, 00:19:11.895 "base_bdevs_list": [ 00:19:11.895 { 00:19:11.895 "name": "spare", 00:19:11.895 "uuid": "69af47a5-8e5b-5950-84f1-6227e1241b9e", 00:19:11.895 "is_configured": true, 00:19:11.895 "data_offset": 256, 00:19:11.895 "data_size": 7936 00:19:11.895 }, 00:19:11.895 { 00:19:11.895 "name": "BaseBdev2", 00:19:11.895 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:11.895 "is_configured": true, 00:19:11.895 "data_offset": 256, 00:19:11.895 "data_size": 7936 00:19:11.895 } 00:19:11.895 ] 00:19:11.895 }' 00:19:11.895 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.895 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.895 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.895 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.895 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:11.895 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.896 [2024-11-26 18:03:53.463860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:11.896 [2024-11-26 18:03:53.539026] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:11.896 [2024-11-26 18:03:53.539219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.896 [2024-11-26 18:03:53.539269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:11.896 [2024-11-26 18:03:53.539305] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.896 "name": "raid_bdev1", 00:19:11.896 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:11.896 "strip_size_kb": 0, 00:19:11.896 "state": "online", 00:19:11.896 "raid_level": "raid1", 00:19:11.896 "superblock": true, 00:19:11.896 "num_base_bdevs": 2, 00:19:11.896 "num_base_bdevs_discovered": 1, 00:19:11.896 "num_base_bdevs_operational": 1, 00:19:11.896 "base_bdevs_list": [ 00:19:11.896 { 00:19:11.896 "name": null, 00:19:11.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.896 "is_configured": false, 00:19:11.896 "data_offset": 0, 00:19:11.896 "data_size": 7936 00:19:11.896 }, 00:19:11.896 { 00:19:11.896 "name": "BaseBdev2", 00:19:11.896 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:11.896 "is_configured": true, 00:19:11.896 "data_offset": 256, 00:19:11.896 "data_size": 7936 00:19:11.896 } 00:19:11.896 ] 00:19:11.896 }' 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.896 18:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.464 "name": "raid_bdev1", 00:19:12.464 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:12.464 "strip_size_kb": 0, 00:19:12.464 "state": "online", 00:19:12.464 "raid_level": "raid1", 00:19:12.464 "superblock": true, 00:19:12.464 "num_base_bdevs": 2, 00:19:12.464 "num_base_bdevs_discovered": 1, 00:19:12.464 "num_base_bdevs_operational": 1, 00:19:12.464 "base_bdevs_list": [ 00:19:12.464 { 00:19:12.464 "name": null, 00:19:12.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.464 "is_configured": false, 00:19:12.464 "data_offset": 0, 00:19:12.464 "data_size": 7936 00:19:12.464 }, 00:19:12.464 { 00:19:12.464 "name": "BaseBdev2", 00:19:12.464 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:12.464 "is_configured": true, 00:19:12.464 "data_offset": 256, 00:19:12.464 "data_size": 7936 00:19:12.464 } 00:19:12.464 ] 00:19:12.464 }' 
00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.464 [2024-11-26 18:03:54.175247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:12.464 [2024-11-26 18:03:54.194076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.464 18:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:12.464 [2024-11-26 18:03:54.196369] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:13.399 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.399 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.399 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.399 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:19:13.399 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.399 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.399 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.399 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.399 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.399 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.399 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.399 "name": "raid_bdev1", 00:19:13.399 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:13.399 "strip_size_kb": 0, 00:19:13.399 "state": "online", 00:19:13.399 "raid_level": "raid1", 00:19:13.399 "superblock": true, 00:19:13.399 "num_base_bdevs": 2, 00:19:13.399 "num_base_bdevs_discovered": 2, 00:19:13.399 "num_base_bdevs_operational": 2, 00:19:13.399 "process": { 00:19:13.399 "type": "rebuild", 00:19:13.399 "target": "spare", 00:19:13.399 "progress": { 00:19:13.399 "blocks": 2560, 00:19:13.399 "percent": 32 00:19:13.399 } 00:19:13.399 }, 00:19:13.399 "base_bdevs_list": [ 00:19:13.399 { 00:19:13.399 "name": "spare", 00:19:13.399 "uuid": "69af47a5-8e5b-5950-84f1-6227e1241b9e", 00:19:13.399 "is_configured": true, 00:19:13.399 "data_offset": 256, 00:19:13.399 "data_size": 7936 00:19:13.399 }, 00:19:13.399 { 00:19:13.399 "name": "BaseBdev2", 00:19:13.399 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:13.399 "is_configured": true, 00:19:13.399 "data_offset": 256, 00:19:13.399 "data_size": 7936 00:19:13.399 } 00:19:13.399 ] 00:19:13.399 }' 00:19:13.399 18:03:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:13.658 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=776 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.658 18:03:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.658 "name": "raid_bdev1", 00:19:13.658 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:13.658 "strip_size_kb": 0, 00:19:13.658 "state": "online", 00:19:13.658 "raid_level": "raid1", 00:19:13.658 "superblock": true, 00:19:13.658 "num_base_bdevs": 2, 00:19:13.658 "num_base_bdevs_discovered": 2, 00:19:13.658 "num_base_bdevs_operational": 2, 00:19:13.658 "process": { 00:19:13.658 "type": "rebuild", 00:19:13.658 "target": "spare", 00:19:13.658 "progress": { 00:19:13.658 "blocks": 2816, 00:19:13.658 "percent": 35 00:19:13.658 } 00:19:13.658 }, 00:19:13.658 "base_bdevs_list": [ 00:19:13.658 { 00:19:13.658 "name": "spare", 00:19:13.658 "uuid": "69af47a5-8e5b-5950-84f1-6227e1241b9e", 00:19:13.658 "is_configured": true, 00:19:13.658 "data_offset": 256, 00:19:13.658 "data_size": 7936 00:19:13.658 }, 00:19:13.658 { 00:19:13.658 "name": "BaseBdev2", 00:19:13.658 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:13.658 "is_configured": true, 00:19:13.658 "data_offset": 256, 00:19:13.658 "data_size": 7936 00:19:13.658 } 00:19:13.658 ] 00:19:13.658 }' 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.658 18:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:15.031 18:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.031 18:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.031 18:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.031 18:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.031 18:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.031 18:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.031 18:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.031 18:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.031 18:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.031 18:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.031 18:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.031 18:03:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.031 "name": "raid_bdev1", 00:19:15.031 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:15.031 "strip_size_kb": 0, 00:19:15.031 "state": "online", 00:19:15.031 "raid_level": "raid1", 00:19:15.031 "superblock": true, 00:19:15.031 "num_base_bdevs": 2, 00:19:15.031 "num_base_bdevs_discovered": 2, 00:19:15.031 "num_base_bdevs_operational": 2, 00:19:15.031 "process": { 00:19:15.031 "type": "rebuild", 00:19:15.031 "target": "spare", 00:19:15.031 "progress": { 00:19:15.031 "blocks": 5632, 00:19:15.031 "percent": 70 00:19:15.031 } 00:19:15.031 }, 00:19:15.031 "base_bdevs_list": [ 00:19:15.031 { 00:19:15.031 "name": "spare", 00:19:15.031 "uuid": "69af47a5-8e5b-5950-84f1-6227e1241b9e", 00:19:15.031 "is_configured": true, 00:19:15.031 "data_offset": 256, 00:19:15.031 "data_size": 7936 00:19:15.031 }, 00:19:15.031 { 00:19:15.031 "name": "BaseBdev2", 00:19:15.031 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:15.031 "is_configured": true, 00:19:15.031 "data_offset": 256, 00:19:15.031 "data_size": 7936 00:19:15.031 } 00:19:15.031 ] 00:19:15.031 }' 00:19:15.031 18:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.031 18:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.031 18:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.031 18:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.031 18:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:15.595 [2024-11-26 18:03:57.312161] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:15.595 [2024-11-26 18:03:57.312254] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:15.595 [2024-11-26 18:03:57.312405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.856 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.856 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.856 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.856 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.856 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.856 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.856 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.856 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.856 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.856 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.856 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.856 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.856 "name": "raid_bdev1", 00:19:15.856 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:15.856 "strip_size_kb": 0, 00:19:15.856 "state": "online", 00:19:15.856 "raid_level": "raid1", 00:19:15.856 "superblock": true, 00:19:15.856 "num_base_bdevs": 2, 00:19:15.856 
"num_base_bdevs_discovered": 2, 00:19:15.856 "num_base_bdevs_operational": 2, 00:19:15.856 "base_bdevs_list": [ 00:19:15.856 { 00:19:15.856 "name": "spare", 00:19:15.856 "uuid": "69af47a5-8e5b-5950-84f1-6227e1241b9e", 00:19:15.856 "is_configured": true, 00:19:15.856 "data_offset": 256, 00:19:15.856 "data_size": 7936 00:19:15.856 }, 00:19:15.856 { 00:19:15.856 "name": "BaseBdev2", 00:19:15.856 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:15.856 "is_configured": true, 00:19:15.856 "data_offset": 256, 00:19:15.856 "data_size": 7936 00:19:15.856 } 00:19:15.856 ] 00:19:15.856 }' 00:19:15.856 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.856 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:15.856 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.116 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:16.116 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:19:16.116 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:16.116 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.116 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:16.116 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:16.116 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.116 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.117 18:03:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.117 "name": "raid_bdev1", 00:19:16.117 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:16.117 "strip_size_kb": 0, 00:19:16.117 "state": "online", 00:19:16.117 "raid_level": "raid1", 00:19:16.117 "superblock": true, 00:19:16.117 "num_base_bdevs": 2, 00:19:16.117 "num_base_bdevs_discovered": 2, 00:19:16.117 "num_base_bdevs_operational": 2, 00:19:16.117 "base_bdevs_list": [ 00:19:16.117 { 00:19:16.117 "name": "spare", 00:19:16.117 "uuid": "69af47a5-8e5b-5950-84f1-6227e1241b9e", 00:19:16.117 "is_configured": true, 00:19:16.117 "data_offset": 256, 00:19:16.117 "data_size": 7936 00:19:16.117 }, 00:19:16.117 { 00:19:16.117 "name": "BaseBdev2", 00:19:16.117 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:16.117 "is_configured": true, 00:19:16.117 "data_offset": 256, 00:19:16.117 "data_size": 7936 00:19:16.117 } 00:19:16.117 ] 00:19:16.117 }' 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:16.117 18:03:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.117 "name": 
"raid_bdev1", 00:19:16.117 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:16.117 "strip_size_kb": 0, 00:19:16.117 "state": "online", 00:19:16.117 "raid_level": "raid1", 00:19:16.117 "superblock": true, 00:19:16.117 "num_base_bdevs": 2, 00:19:16.117 "num_base_bdevs_discovered": 2, 00:19:16.117 "num_base_bdevs_operational": 2, 00:19:16.117 "base_bdevs_list": [ 00:19:16.117 { 00:19:16.117 "name": "spare", 00:19:16.117 "uuid": "69af47a5-8e5b-5950-84f1-6227e1241b9e", 00:19:16.117 "is_configured": true, 00:19:16.117 "data_offset": 256, 00:19:16.117 "data_size": 7936 00:19:16.117 }, 00:19:16.117 { 00:19:16.117 "name": "BaseBdev2", 00:19:16.117 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:16.117 "is_configured": true, 00:19:16.117 "data_offset": 256, 00:19:16.117 "data_size": 7936 00:19:16.117 } 00:19:16.117 ] 00:19:16.117 }' 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.117 18:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.680 [2024-11-26 18:03:58.324108] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:16.680 [2024-11-26 18:03:58.324147] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:16.680 [2024-11-26 18:03:58.324264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.680 [2024-11-26 18:03:58.324343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:16.680 [2024-11-26 
18:03:58.324357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.680 18:03:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.680 [2024-11-26 18:03:58.387997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:16.680 [2024-11-26 18:03:58.388104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.680 [2024-11-26 18:03:58.388134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:16.680 [2024-11-26 18:03:58.388145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.680 [2024-11-26 18:03:58.390496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.680 [2024-11-26 18:03:58.390544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:16.680 [2024-11-26 18:03:58.390624] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:16.680 [2024-11-26 18:03:58.390698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:16.680 [2024-11-26 18:03:58.390838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:16.680 spare 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.680 [2024-11-26 18:03:58.490768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:16.680 [2024-11-26 18:03:58.490969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:16.680 [2024-11-26 18:03:58.491183] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:16.680 [2024-11-26 18:03:58.491342] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:16.680 [2024-11-26 18:03:58.491358] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:16.680 [2024-11-26 18:03:58.491498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.680 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.681 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.681 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.681 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:16.681 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.681 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.681 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.681 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.681 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.681 18:03:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.681 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.681 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.681 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.940 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.940 "name": "raid_bdev1", 00:19:16.940 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:16.940 "strip_size_kb": 0, 00:19:16.940 "state": "online", 00:19:16.940 "raid_level": "raid1", 00:19:16.940 "superblock": true, 00:19:16.940 "num_base_bdevs": 2, 00:19:16.940 "num_base_bdevs_discovered": 2, 00:19:16.940 "num_base_bdevs_operational": 2, 00:19:16.940 "base_bdevs_list": [ 00:19:16.940 { 00:19:16.940 "name": "spare", 00:19:16.940 "uuid": "69af47a5-8e5b-5950-84f1-6227e1241b9e", 00:19:16.940 "is_configured": true, 00:19:16.940 "data_offset": 256, 00:19:16.940 "data_size": 7936 00:19:16.940 }, 00:19:16.940 { 00:19:16.940 "name": "BaseBdev2", 00:19:16.940 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:16.940 "is_configured": true, 00:19:16.940 "data_offset": 256, 00:19:16.940 "data_size": 7936 00:19:16.940 } 00:19:16.940 ] 00:19:16.940 }' 00:19:16.940 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.940 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.198 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:17.198 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.198 18:03:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:17.198 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:17.198 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.198 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.198 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.198 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.198 18:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.198 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.198 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.198 "name": "raid_bdev1", 00:19:17.198 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:17.198 "strip_size_kb": 0, 00:19:17.198 "state": "online", 00:19:17.198 "raid_level": "raid1", 00:19:17.198 "superblock": true, 00:19:17.198 "num_base_bdevs": 2, 00:19:17.198 "num_base_bdevs_discovered": 2, 00:19:17.198 "num_base_bdevs_operational": 2, 00:19:17.198 "base_bdevs_list": [ 00:19:17.198 { 00:19:17.198 "name": "spare", 00:19:17.198 "uuid": "69af47a5-8e5b-5950-84f1-6227e1241b9e", 00:19:17.198 "is_configured": true, 00:19:17.198 "data_offset": 256, 00:19:17.198 "data_size": 7936 00:19:17.198 }, 00:19:17.198 { 00:19:17.198 "name": "BaseBdev2", 00:19:17.198 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:17.198 "is_configured": true, 00:19:17.198 "data_offset": 256, 00:19:17.198 "data_size": 7936 00:19:17.198 } 00:19:17.198 ] 00:19:17.198 }' 00:19:17.198 18:03:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.464 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:17.464 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.464 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:17.464 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:17.464 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.464 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.464 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.464 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.464 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.464 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:17.464 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.464 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.464 [2024-11-26 18:03:59.143278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:17.464 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.464 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:17.464 18:03:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.464 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.465 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.465 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.465 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:17.465 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.465 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.465 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.465 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.465 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.465 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.465 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.465 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.465 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.465 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.465 "name": "raid_bdev1", 00:19:17.465 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:17.465 "strip_size_kb": 0, 00:19:17.465 "state": "online", 00:19:17.465 
"raid_level": "raid1", 00:19:17.465 "superblock": true, 00:19:17.465 "num_base_bdevs": 2, 00:19:17.465 "num_base_bdevs_discovered": 1, 00:19:17.465 "num_base_bdevs_operational": 1, 00:19:17.465 "base_bdevs_list": [ 00:19:17.465 { 00:19:17.465 "name": null, 00:19:17.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.465 "is_configured": false, 00:19:17.465 "data_offset": 0, 00:19:17.465 "data_size": 7936 00:19:17.465 }, 00:19:17.465 { 00:19:17.465 "name": "BaseBdev2", 00:19:17.465 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:17.465 "is_configured": true, 00:19:17.465 "data_offset": 256, 00:19:17.465 "data_size": 7936 00:19:17.465 } 00:19:17.465 ] 00:19:17.465 }' 00:19:17.465 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.465 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.736 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:17.736 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.736 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.736 [2024-11-26 18:03:59.574584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:17.736 [2024-11-26 18:03:59.574903] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:17.736 [2024-11-26 18:03:59.574989] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:17.736 [2024-11-26 18:03:59.575161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.007 [2024-11-26 18:03:59.594460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:18.007 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.007 18:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:18.007 [2024-11-26 18:03:59.596802] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:18.947 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:18.947 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.947 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:18.947 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:18.947 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.947 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.947 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.947 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.947 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.947 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.947 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:18.947 "name": "raid_bdev1", 00:19:18.947 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:18.947 "strip_size_kb": 0, 00:19:18.947 "state": "online", 00:19:18.947 "raid_level": "raid1", 00:19:18.947 "superblock": true, 00:19:18.947 "num_base_bdevs": 2, 00:19:18.947 "num_base_bdevs_discovered": 2, 00:19:18.947 "num_base_bdevs_operational": 2, 00:19:18.947 "process": { 00:19:18.947 "type": "rebuild", 00:19:18.947 "target": "spare", 00:19:18.947 "progress": { 00:19:18.947 "blocks": 2560, 00:19:18.947 "percent": 32 00:19:18.947 } 00:19:18.947 }, 00:19:18.947 "base_bdevs_list": [ 00:19:18.947 { 00:19:18.947 "name": "spare", 00:19:18.947 "uuid": "69af47a5-8e5b-5950-84f1-6227e1241b9e", 00:19:18.947 "is_configured": true, 00:19:18.947 "data_offset": 256, 00:19:18.947 "data_size": 7936 00:19:18.947 }, 00:19:18.947 { 00:19:18.947 "name": "BaseBdev2", 00:19:18.947 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:18.947 "is_configured": true, 00:19:18.947 "data_offset": 256, 00:19:18.947 "data_size": 7936 00:19:18.947 } 00:19:18.947 ] 00:19:18.947 }' 00:19:18.947 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.947 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:18.947 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.947 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:18.947 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:18.947 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.947 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.947 [2024-11-26 18:04:00.755673] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:18.947 [2024-11-26 18:04:00.803457] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:18.947 [2024-11-26 18:04:00.803562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.947 [2024-11-26 18:04:00.803581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:18.947 [2024-11-26 18:04:00.803593] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:19.206 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.206 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:19.206 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.206 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.206 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.206 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.206 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:19.206 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.206 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.206 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.206 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.206 18:04:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.206 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.206 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.206 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.206 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.206 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.206 "name": "raid_bdev1", 00:19:19.206 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:19.206 "strip_size_kb": 0, 00:19:19.206 "state": "online", 00:19:19.206 "raid_level": "raid1", 00:19:19.206 "superblock": true, 00:19:19.206 "num_base_bdevs": 2, 00:19:19.206 "num_base_bdevs_discovered": 1, 00:19:19.206 "num_base_bdevs_operational": 1, 00:19:19.206 "base_bdevs_list": [ 00:19:19.206 { 00:19:19.206 "name": null, 00:19:19.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.206 "is_configured": false, 00:19:19.206 "data_offset": 0, 00:19:19.206 "data_size": 7936 00:19:19.206 }, 00:19:19.206 { 00:19:19.206 "name": "BaseBdev2", 00:19:19.206 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:19.206 "is_configured": true, 00:19:19.206 "data_offset": 256, 00:19:19.206 "data_size": 7936 00:19:19.206 } 00:19:19.206 ] 00:19:19.206 }' 00:19:19.206 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.206 18:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.773 18:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:19.773 18:04:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.773 18:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.773 [2024-11-26 18:04:01.335468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:19.773 [2024-11-26 18:04:01.335639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.773 [2024-11-26 18:04:01.335706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:19.773 [2024-11-26 18:04:01.335746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.773 [2024-11-26 18:04:01.336009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.773 [2024-11-26 18:04:01.336087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:19.773 [2024-11-26 18:04:01.336188] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:19.773 [2024-11-26 18:04:01.336234] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:19.773 [2024-11-26 18:04:01.336282] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:19.773 [2024-11-26 18:04:01.336338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:19.773 [2024-11-26 18:04:01.356754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:19.773 spare 00:19:19.773 18:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.773 18:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:19.773 [2024-11-26 18:04:01.359136] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:20.711 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.711 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.711 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:20.711 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:20.711 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.711 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.711 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.711 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.712 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.712 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.712 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:20.712 "name": "raid_bdev1", 00:19:20.712 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:20.712 "strip_size_kb": 0, 00:19:20.712 "state": "online", 00:19:20.712 "raid_level": "raid1", 00:19:20.712 "superblock": true, 00:19:20.712 "num_base_bdevs": 2, 00:19:20.712 "num_base_bdevs_discovered": 2, 00:19:20.712 "num_base_bdevs_operational": 2, 00:19:20.712 "process": { 00:19:20.712 "type": "rebuild", 00:19:20.712 "target": "spare", 00:19:20.712 "progress": { 00:19:20.712 "blocks": 2560, 00:19:20.712 "percent": 32 00:19:20.712 } 00:19:20.712 }, 00:19:20.712 "base_bdevs_list": [ 00:19:20.712 { 00:19:20.712 "name": "spare", 00:19:20.712 "uuid": "69af47a5-8e5b-5950-84f1-6227e1241b9e", 00:19:20.712 "is_configured": true, 00:19:20.712 "data_offset": 256, 00:19:20.712 "data_size": 7936 00:19:20.712 }, 00:19:20.712 { 00:19:20.712 "name": "BaseBdev2", 00:19:20.712 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:20.712 "is_configured": true, 00:19:20.712 "data_offset": 256, 00:19:20.712 "data_size": 7936 00:19:20.712 } 00:19:20.712 ] 00:19:20.712 }' 00:19:20.712 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.712 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.712 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.712 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.712 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:20.712 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.712 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.712 [2024-11-26 
18:04:02.522452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.712 [2024-11-26 18:04:02.565623] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:20.712 [2024-11-26 18:04:02.565715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.712 [2024-11-26 18:04:02.565738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.712 [2024-11-26 18:04:02.565746] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:20.971 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.971 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:20.971 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.971 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.971 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.971 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.971 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:20.971 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.971 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.971 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.971 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.971 18:04:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.971 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.971 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.971 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.971 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.971 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.971 "name": "raid_bdev1", 00:19:20.971 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:20.971 "strip_size_kb": 0, 00:19:20.971 "state": "online", 00:19:20.971 "raid_level": "raid1", 00:19:20.971 "superblock": true, 00:19:20.971 "num_base_bdevs": 2, 00:19:20.971 "num_base_bdevs_discovered": 1, 00:19:20.971 "num_base_bdevs_operational": 1, 00:19:20.971 "base_bdevs_list": [ 00:19:20.971 { 00:19:20.971 "name": null, 00:19:20.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.971 "is_configured": false, 00:19:20.971 "data_offset": 0, 00:19:20.971 "data_size": 7936 00:19:20.971 }, 00:19:20.971 { 00:19:20.971 "name": "BaseBdev2", 00:19:20.971 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:20.971 "is_configured": true, 00:19:20.971 "data_offset": 256, 00:19:20.971 "data_size": 7936 00:19:20.971 } 00:19:20.971 ] 00:19:20.971 }' 00:19:20.971 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.971 18:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.230 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:21.230 18:04:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.230 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:21.230 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:21.230 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.230 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.230 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.230 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.230 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.230 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.500 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.500 "name": "raid_bdev1", 00:19:21.500 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:21.500 "strip_size_kb": 0, 00:19:21.500 "state": "online", 00:19:21.500 "raid_level": "raid1", 00:19:21.500 "superblock": true, 00:19:21.500 "num_base_bdevs": 2, 00:19:21.500 "num_base_bdevs_discovered": 1, 00:19:21.500 "num_base_bdevs_operational": 1, 00:19:21.500 "base_bdevs_list": [ 00:19:21.500 { 00:19:21.500 "name": null, 00:19:21.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.500 "is_configured": false, 00:19:21.500 "data_offset": 0, 00:19:21.500 "data_size": 7936 00:19:21.500 }, 00:19:21.500 { 00:19:21.500 "name": "BaseBdev2", 00:19:21.500 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:21.500 "is_configured": true, 00:19:21.500 "data_offset": 256, 
00:19:21.500 "data_size": 7936 00:19:21.500 } 00:19:21.500 ] 00:19:21.500 }' 00:19:21.500 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.500 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:21.500 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.500 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:21.500 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:21.500 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.500 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.500 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.500 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:21.500 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.500 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.500 [2024-11-26 18:04:03.221417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:21.500 [2024-11-26 18:04:03.221502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.500 [2024-11-26 18:04:03.221529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:21.500 [2024-11-26 18:04:03.221540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.500 [2024-11-26 18:04:03.221767] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.500 [2024-11-26 18:04:03.221785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:21.500 [2024-11-26 18:04:03.221849] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:21.500 [2024-11-26 18:04:03.221863] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:21.500 [2024-11-26 18:04:03.221875] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:21.500 [2024-11-26 18:04:03.221887] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:21.500 BaseBdev1 00:19:21.500 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.500 18:04:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:22.436 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.436 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.436 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.436 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.436 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.436 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:22.436 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.436 18:04:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.436 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.436 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.436 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.436 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.436 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.436 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.436 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.436 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.436 "name": "raid_bdev1", 00:19:22.436 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:22.436 "strip_size_kb": 0, 00:19:22.436 "state": "online", 00:19:22.436 "raid_level": "raid1", 00:19:22.436 "superblock": true, 00:19:22.436 "num_base_bdevs": 2, 00:19:22.436 "num_base_bdevs_discovered": 1, 00:19:22.436 "num_base_bdevs_operational": 1, 00:19:22.436 "base_bdevs_list": [ 00:19:22.436 { 00:19:22.436 "name": null, 00:19:22.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.436 "is_configured": false, 00:19:22.436 "data_offset": 0, 00:19:22.436 "data_size": 7936 00:19:22.436 }, 00:19:22.436 { 00:19:22.436 "name": "BaseBdev2", 00:19:22.436 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:22.436 "is_configured": true, 00:19:22.436 "data_offset": 256, 00:19:22.436 "data_size": 7936 00:19:22.436 } 00:19:22.436 ] 00:19:22.436 }' 00:19:22.436 18:04:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.436 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.004 "name": "raid_bdev1", 00:19:23.004 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:23.004 "strip_size_kb": 0, 00:19:23.004 "state": "online", 00:19:23.004 "raid_level": "raid1", 00:19:23.004 "superblock": true, 00:19:23.004 "num_base_bdevs": 2, 00:19:23.004 "num_base_bdevs_discovered": 1, 00:19:23.004 "num_base_bdevs_operational": 1, 00:19:23.004 "base_bdevs_list": [ 00:19:23.004 { 00:19:23.004 "name": 
null, 00:19:23.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.004 "is_configured": false, 00:19:23.004 "data_offset": 0, 00:19:23.004 "data_size": 7936 00:19:23.004 }, 00:19:23.004 { 00:19:23.004 "name": "BaseBdev2", 00:19:23.004 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:23.004 "is_configured": true, 00:19:23.004 "data_offset": 256, 00:19:23.004 "data_size": 7936 00:19:23.004 } 00:19:23.004 ] 00:19:23.004 }' 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.004 [2024-11-26 18:04:04.822928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:23.004 [2024-11-26 18:04:04.823174] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:23.004 [2024-11-26 18:04:04.823248] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:23.004 request: 00:19:23.004 { 00:19:23.004 "base_bdev": "BaseBdev1", 00:19:23.004 "raid_bdev": "raid_bdev1", 00:19:23.004 "method": "bdev_raid_add_base_bdev", 00:19:23.004 "req_id": 1 00:19:23.004 } 00:19:23.004 Got JSON-RPC error response 00:19:23.004 response: 00:19:23.004 { 00:19:23.004 "code": -22, 00:19:23.004 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:23.004 } 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:23.004 18:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:24.380 18:04:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:24.380 18:04:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.380 18:04:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.380 18:04:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.380 18:04:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.380 18:04:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:24.380 18:04:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.380 18:04:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.380 18:04:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.380 18:04:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.380 18:04:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.380 18:04:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.380 18:04:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.380 18:04:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.380 18:04:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.380 18:04:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.380 "name": "raid_bdev1", 00:19:24.381 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:24.381 "strip_size_kb": 0, 
00:19:24.381 "state": "online", 00:19:24.381 "raid_level": "raid1", 00:19:24.381 "superblock": true, 00:19:24.381 "num_base_bdevs": 2, 00:19:24.381 "num_base_bdevs_discovered": 1, 00:19:24.381 "num_base_bdevs_operational": 1, 00:19:24.381 "base_bdevs_list": [ 00:19:24.381 { 00:19:24.381 "name": null, 00:19:24.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.381 "is_configured": false, 00:19:24.381 "data_offset": 0, 00:19:24.381 "data_size": 7936 00:19:24.381 }, 00:19:24.381 { 00:19:24.381 "name": "BaseBdev2", 00:19:24.381 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:24.381 "is_configured": true, 00:19:24.381 "data_offset": 256, 00:19:24.381 "data_size": 7936 00:19:24.381 } 00:19:24.381 ] 00:19:24.381 }' 00:19:24.381 18:04:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.381 18:04:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.639 18:04:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.639 "name": "raid_bdev1", 00:19:24.639 "uuid": "bf9cecba-c311-4972-bb8e-f05ceaceaa8c", 00:19:24.639 "strip_size_kb": 0, 00:19:24.639 "state": "online", 00:19:24.639 "raid_level": "raid1", 00:19:24.639 "superblock": true, 00:19:24.639 "num_base_bdevs": 2, 00:19:24.639 "num_base_bdevs_discovered": 1, 00:19:24.639 "num_base_bdevs_operational": 1, 00:19:24.639 "base_bdevs_list": [ 00:19:24.639 { 00:19:24.639 "name": null, 00:19:24.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.639 "is_configured": false, 00:19:24.639 "data_offset": 0, 00:19:24.639 "data_size": 7936 00:19:24.639 }, 00:19:24.639 { 00:19:24.639 "name": "BaseBdev2", 00:19:24.639 "uuid": "e9fc6a2d-ffe0-53aa-bf9d-bfab79917c73", 00:19:24.639 "is_configured": true, 00:19:24.639 "data_offset": 256, 00:19:24.639 "data_size": 7936 00:19:24.639 } 00:19:24.639 ] 00:19:24.639 }' 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89502 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89502 ']' 00:19:24.639 18:04:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89502 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89502 00:19:24.639 killing process with pid 89502 00:19:24.639 Received shutdown signal, test time was about 60.000000 seconds 00:19:24.639 00:19:24.639 Latency(us) 00:19:24.639 [2024-11-26T18:04:06.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.639 [2024-11-26T18:04:06.502Z] =================================================================================================================== 00:19:24.639 [2024-11-26T18:04:06.502Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89502' 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89502 00:19:24.639 [2024-11-26 18:04:06.462459] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:24.639 18:04:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89502 00:19:24.639 [2024-11-26 18:04:06.462612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:24.639 [2024-11-26 18:04:06.462673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:19:24.639 [2024-11-26 18:04:06.462687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:25.205 [2024-11-26 18:04:06.835897] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:26.582 18:04:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:26.582 00:19:26.582 real 0m18.052s 00:19:26.582 user 0m23.640s 00:19:26.582 sys 0m1.546s 00:19:26.582 18:04:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.582 ************************************ 00:19:26.582 END TEST raid_rebuild_test_sb_md_interleaved 00:19:26.582 ************************************ 00:19:26.582 18:04:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.582 18:04:08 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:26.582 18:04:08 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:26.582 18:04:08 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89502 ']' 00:19:26.582 18:04:08 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89502 00:19:26.582 18:04:08 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:26.582 ************************************ 00:19:26.582 END TEST bdev_raid 00:19:26.582 ************************************ 00:19:26.582 00:19:26.582 real 12m38.406s 00:19:26.582 user 17m5.401s 00:19:26.582 sys 1m55.742s 00:19:26.582 18:04:08 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.582 18:04:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:26.582 18:04:08 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:26.582 18:04:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:26.582 18:04:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.582 18:04:08 -- common/autotest_common.sh@10 -- # set +x 00:19:26.582 
************************************ 00:19:26.582 START TEST spdkcli_raid 00:19:26.582 ************************************ 00:19:26.582 18:04:08 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:26.582 * Looking for test storage... 00:19:26.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:26.843 18:04:08 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:26.843 18:04:08 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:19:26.843 18:04:08 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:26.843 18:04:08 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.843 18:04:08 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:26.843 18:04:08 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.843 18:04:08 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:26.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.843 --rc genhtml_branch_coverage=1 00:19:26.843 --rc genhtml_function_coverage=1 00:19:26.843 --rc genhtml_legend=1 00:19:26.843 --rc geninfo_all_blocks=1 00:19:26.843 --rc geninfo_unexecuted_blocks=1 00:19:26.843 00:19:26.843 ' 00:19:26.843 18:04:08 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:26.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.843 --rc genhtml_branch_coverage=1 00:19:26.843 --rc genhtml_function_coverage=1 00:19:26.843 --rc genhtml_legend=1 00:19:26.843 --rc geninfo_all_blocks=1 00:19:26.843 --rc geninfo_unexecuted_blocks=1 00:19:26.843 00:19:26.843 ' 00:19:26.843 
18:04:08 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:26.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.843 --rc genhtml_branch_coverage=1 00:19:26.843 --rc genhtml_function_coverage=1 00:19:26.843 --rc genhtml_legend=1 00:19:26.843 --rc geninfo_all_blocks=1 00:19:26.843 --rc geninfo_unexecuted_blocks=1 00:19:26.843 00:19:26.843 ' 00:19:26.843 18:04:08 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:26.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.843 --rc genhtml_branch_coverage=1 00:19:26.843 --rc genhtml_function_coverage=1 00:19:26.843 --rc genhtml_legend=1 00:19:26.843 --rc geninfo_all_blocks=1 00:19:26.843 --rc geninfo_unexecuted_blocks=1 00:19:26.844 00:19:26.844 ' 00:19:26.844 18:04:08 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:26.844 18:04:08 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:26.844 18:04:08 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:26.844 18:04:08 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:26.844 18:04:08 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:26.844 18:04:08 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:26.844 18:04:08 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:26.844 18:04:08 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:26.844 18:04:08 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:26.844 18:04:08 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:26.844 18:04:08 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:26.844 18:04:08 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:26.844 18:04:08 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:26.844 18:04:08 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:26.844 18:04:08 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:26.844 18:04:08 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.844 18:04:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:26.844 18:04:08 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:26.844 18:04:08 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90184 00:19:26.844 18:04:08 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:26.844 18:04:08 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90184 00:19:26.844 18:04:08 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90184 ']' 00:19:26.844 18:04:08 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.844 18:04:08 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.844 18:04:08 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.844 18:04:08 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.844 18:04:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.109 [2024-11-26 18:04:08.707730] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:19:27.109 [2024-11-26 18:04:08.707873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90184 ] 00:19:27.109 [2024-11-26 18:04:08.889161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:27.368 [2024-11-26 18:04:09.027559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.368 [2024-11-26 18:04:09.027600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.303 18:04:10 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.303 18:04:10 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:19:28.303 18:04:10 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:28.303 18:04:10 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:28.303 18:04:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:28.303 18:04:10 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:28.303 18:04:10 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:28.303 18:04:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:28.303 18:04:10 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:28.303 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:28.303 ' 00:19:30.209 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:30.209 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:30.209 18:04:11 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:30.209 18:04:11 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:30.209 18:04:11 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:30.209 18:04:11 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:30.209 18:04:11 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:30.209 18:04:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:30.209 18:04:11 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:30.209 ' 00:19:31.168 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:31.425 18:04:13 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:31.425 18:04:13 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.425 18:04:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:31.425 18:04:13 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:31.425 18:04:13 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.425 18:04:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:31.425 18:04:13 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:31.425 18:04:13 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:31.993 18:04:13 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:31.993 18:04:13 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:31.993 18:04:13 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:31.993 18:04:13 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.993 18:04:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:31.993 18:04:13 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:31.993 18:04:13 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.993 18:04:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:31.993 18:04:13 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:31.993 ' 00:19:33.374 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:33.374 18:04:14 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:33.374 18:04:14 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:33.374 18:04:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:33.374 18:04:14 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:33.374 18:04:14 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:33.374 18:04:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:33.374 18:04:14 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:33.374 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:33.374 ' 00:19:34.749 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:34.749 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:34.749 18:04:16 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:34.749 18:04:16 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:34.749 18:04:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.749 18:04:16 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90184 00:19:34.749 18:04:16 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90184 ']' 00:19:34.749 18:04:16 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90184 00:19:34.749 18:04:16 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:19:34.749 18:04:16 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.749 18:04:16 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90184 00:19:35.007 18:04:16 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.007 killing process with pid 90184 00:19:35.007 18:04:16 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.007 18:04:16 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90184' 00:19:35.007 18:04:16 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90184 00:19:35.007 18:04:16 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90184 00:19:38.300 18:04:19 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:38.300 18:04:19 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90184 ']' 00:19:38.300 18:04:19 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90184 00:19:38.300 18:04:19 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90184 ']' 00:19:38.300 18:04:19 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90184 00:19:38.300 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90184) - No such process 00:19:38.300 18:04:19 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90184 is not found' 00:19:38.300 Process with pid 90184 is not found 00:19:38.300 18:04:19 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:38.300 18:04:19 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:38.300 18:04:19 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:38.300 18:04:19 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:38.300 00:19:38.300 real 0m11.168s 00:19:38.300 user 0m23.128s 00:19:38.300 sys 
0m1.208s 00:19:38.300 18:04:19 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.300 18:04:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:38.300 ************************************ 00:19:38.300 END TEST spdkcli_raid 00:19:38.300 ************************************ 00:19:38.300 18:04:19 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:38.300 18:04:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:38.300 18:04:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:38.300 18:04:19 -- common/autotest_common.sh@10 -- # set +x 00:19:38.300 ************************************ 00:19:38.300 START TEST blockdev_raid5f 00:19:38.300 ************************************ 00:19:38.300 18:04:19 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:38.300 * Looking for test storage... 00:19:38.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:38.300 18:04:19 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:38.300 18:04:19 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:19:38.300 18:04:19 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:38.300 18:04:19 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:38.300 18:04:19 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:38.300 18:04:19 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:38.300 18:04:19 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:38.300 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.300 --rc genhtml_branch_coverage=1 00:19:38.300 --rc genhtml_function_coverage=1 00:19:38.300 --rc genhtml_legend=1 00:19:38.300 --rc geninfo_all_blocks=1 00:19:38.300 --rc geninfo_unexecuted_blocks=1 00:19:38.300 00:19:38.300 ' 00:19:38.300 18:04:19 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:38.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.300 --rc genhtml_branch_coverage=1 00:19:38.300 --rc genhtml_function_coverage=1 00:19:38.300 --rc genhtml_legend=1 00:19:38.300 --rc geninfo_all_blocks=1 00:19:38.300 --rc geninfo_unexecuted_blocks=1 00:19:38.300 00:19:38.300 ' 00:19:38.301 18:04:19 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:38.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.301 --rc genhtml_branch_coverage=1 00:19:38.301 --rc genhtml_function_coverage=1 00:19:38.301 --rc genhtml_legend=1 00:19:38.301 --rc geninfo_all_blocks=1 00:19:38.301 --rc geninfo_unexecuted_blocks=1 00:19:38.301 00:19:38.301 ' 00:19:38.301 18:04:19 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:38.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.301 --rc genhtml_branch_coverage=1 00:19:38.301 --rc genhtml_function_coverage=1 00:19:38.301 --rc genhtml_legend=1 00:19:38.301 --rc geninfo_all_blocks=1 00:19:38.301 --rc geninfo_unexecuted_blocks=1 00:19:38.301 00:19:38.301 ' 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90470 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:38.301 18:04:19 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90470 00:19:38.301 18:04:19 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90470 ']' 00:19:38.301 18:04:19 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.301 18:04:19 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.301 18:04:19 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.301 18:04:19 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.301 18:04:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:38.301 [2024-11-26 18:04:19.920891] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:19:38.301 [2024-11-26 18:04:19.921216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90470 ] 00:19:38.301 [2024-11-26 18:04:20.101750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.560 [2024-11-26 18:04:20.235982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.495 18:04:21 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.495 18:04:21 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:19:39.495 18:04:21 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:39.495 18:04:21 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:19:39.495 18:04:21 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:39.495 18:04:21 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.495 18:04:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:39.495 Malloc0 00:19:39.495 Malloc1 00:19:39.812 Malloc2 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.812 18:04:21 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.812 18:04:21 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:19:39.812 18:04:21 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.812 18:04:21 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.812 18:04:21 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.812 18:04:21 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:39.812 18:04:21 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:39.812 18:04:21 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.812 18:04:21 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:39.812 18:04:21 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:39.812 18:04:21 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "42f3378a-067d-4482-961e-b004af1ef062"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "42f3378a-067d-4482-961e-b004af1ef062",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "42f3378a-067d-4482-961e-b004af1ef062",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "821b454a-e548-4c45-8202-bf8cb1112e16",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"21c3f391-3cb2-4408-be36-6ab9467ff3fe",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "8f154073-9f89-4129-96bc-05f85c9baede",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:39.812 18:04:21 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:39.812 18:04:21 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:19:39.812 18:04:21 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:39.812 18:04:21 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90470 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90470 ']' 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90470 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90470 00:19:39.812 killing process with pid 90470 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:39.812 18:04:21 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90470' 00:19:39.813 18:04:21 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90470 00:19:39.813 18:04:21 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90470 00:19:43.108 18:04:24 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:43.108 18:04:24 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:43.108 18:04:24 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:43.108 18:04:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:43.108 18:04:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:43.108 ************************************ 00:19:43.108 START TEST bdev_hello_world 00:19:43.108 ************************************ 00:19:43.108 18:04:24 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:43.108 [2024-11-26 18:04:24.801569] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:19:43.108 [2024-11-26 18:04:24.801808] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90543 ] 00:19:43.108 [2024-11-26 18:04:24.962825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.367 [2024-11-26 18:04:25.097110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.937 [2024-11-26 18:04:25.696494] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:43.937 [2024-11-26 18:04:25.696560] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:43.937 [2024-11-26 18:04:25.696585] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:43.937 [2024-11-26 18:04:25.697204] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:43.937 [2024-11-26 18:04:25.697397] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:43.937 [2024-11-26 18:04:25.697424] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:43.937 [2024-11-26 18:04:25.697514] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:43.937 00:19:43.937 [2024-11-26 18:04:25.697540] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:45.854 00:19:45.854 real 0m2.549s 00:19:45.854 user 0m2.170s 00:19:45.854 sys 0m0.255s 00:19:45.854 18:04:27 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.854 ************************************ 00:19:45.854 END TEST bdev_hello_world 00:19:45.854 ************************************ 00:19:45.854 18:04:27 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:45.854 18:04:27 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:45.854 18:04:27 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:45.854 18:04:27 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.854 18:04:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:45.854 ************************************ 00:19:45.854 START TEST bdev_bounds 00:19:45.854 ************************************ 00:19:45.854 18:04:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:45.854 18:04:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90591 00:19:45.854 18:04:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:45.854 18:04:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:45.854 Process bdevio pid: 90591 00:19:45.854 18:04:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90591' 00:19:45.854 18:04:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90591 00:19:45.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:45.854 18:04:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90591 ']' 00:19:45.854 18:04:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.854 18:04:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.854 18:04:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.854 18:04:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.854 18:04:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:45.854 [2024-11-26 18:04:27.416337] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:19:45.854 [2024-11-26 18:04:27.416481] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90591 ] 00:19:45.854 [2024-11-26 18:04:27.595072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:46.112 [2024-11-26 18:04:27.735951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.112 [2024-11-26 18:04:27.735989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.112 [2024-11-26 18:04:27.736001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.678 18:04:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.678 18:04:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:46.678 18:04:28 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:46.678 I/O targets: 00:19:46.678 raid5f: 131072 blocks of 512 bytes (64 
MiB) 00:19:46.678 00:19:46.678 00:19:46.678 CUnit - A unit testing framework for C - Version 2.1-3 00:19:46.678 http://cunit.sourceforge.net/ 00:19:46.678 00:19:46.678 00:19:46.678 Suite: bdevio tests on: raid5f 00:19:46.678 Test: blockdev write read block ...passed 00:19:46.678 Test: blockdev write zeroes read block ...passed 00:19:46.678 Test: blockdev write zeroes read no split ...passed 00:19:46.937 Test: blockdev write zeroes read split ...passed 00:19:46.937 Test: blockdev write zeroes read split partial ...passed 00:19:46.937 Test: blockdev reset ...passed 00:19:46.937 Test: blockdev write read 8 blocks ...passed 00:19:46.937 Test: blockdev write read size > 128k ...passed 00:19:46.937 Test: blockdev write read invalid size ...passed 00:19:46.938 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:46.938 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:46.938 Test: blockdev write read max offset ...passed 00:19:46.938 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:46.938 Test: blockdev writev readv 8 blocks ...passed 00:19:46.938 Test: blockdev writev readv 30 x 1block ...passed 00:19:46.938 Test: blockdev writev readv block ...passed 00:19:46.938 Test: blockdev writev readv size > 128k ...passed 00:19:46.938 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:46.938 Test: blockdev comparev and writev ...passed 00:19:46.938 Test: blockdev nvme passthru rw ...passed 00:19:46.938 Test: blockdev nvme passthru vendor specific ...passed 00:19:46.938 Test: blockdev nvme admin passthru ...passed 00:19:46.938 Test: blockdev copy ...passed 00:19:46.938 00:19:46.938 Run Summary: Type Total Ran Passed Failed Inactive 00:19:46.938 suites 1 1 n/a 0 0 00:19:46.938 tests 23 23 23 0 0 00:19:46.938 asserts 130 130 130 0 n/a 00:19:46.938 00:19:46.938 Elapsed time = 0.738 seconds 00:19:46.938 0 00:19:47.197 18:04:28 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # 
killprocess 90591 00:19:47.197 18:04:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90591 ']' 00:19:47.197 18:04:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90591 00:19:47.197 18:04:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:47.197 18:04:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.197 18:04:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90591 00:19:47.197 18:04:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:47.197 18:04:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:47.197 18:04:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90591' 00:19:47.197 killing process with pid 90591 00:19:47.197 18:04:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90591 00:19:47.197 18:04:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90591 00:19:49.148 18:04:30 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:49.148 00:19:49.148 real 0m3.154s 00:19:49.148 user 0m7.961s 00:19:49.148 sys 0m0.366s 00:19:49.148 18:04:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:49.148 ************************************ 00:19:49.148 END TEST bdev_bounds 00:19:49.148 ************************************ 00:19:49.148 18:04:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:49.148 18:04:30 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:49.148 18:04:30 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:49.148 18:04:30 blockdev_raid5f -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:19:49.148 18:04:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:49.148 ************************************ 00:19:49.148 START TEST bdev_nbd 00:19:49.148 ************************************ 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:49.148 18:04:30 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90656 00:19:49.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90656 /var/tmp/spdk-nbd.sock 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90656 ']' 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.148 18:04:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:49.148 [2024-11-26 18:04:30.645769] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:19:49.148 [2024-11-26 18:04:30.645983] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.148 [2024-11-26 18:04:30.809679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.148 [2024-11-26 18:04:30.941554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.717 18:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.718 18:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:49.718 18:04:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:49.718 18:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:49.718 18:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:49.718 18:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:49.718 18:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:49.718 18:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:49.718 18:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:49.718 18:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:49.718 18:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:49.718 18:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:49.718 18:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:49.977 18:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:49.977 18:04:31 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:50.237 1+0 records in 00:19:50.237 1+0 records out 00:19:50.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551333 s, 7.4 MB/s 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:50.237 18:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:50.497 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:50.497 { 00:19:50.497 "nbd_device": "/dev/nbd0", 00:19:50.497 "bdev_name": "raid5f" 00:19:50.497 } 00:19:50.497 ]' 00:19:50.497 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:50.497 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:50.497 { 00:19:50.497 "nbd_device": "/dev/nbd0", 00:19:50.497 "bdev_name": "raid5f" 00:19:50.497 } 00:19:50.497 ]' 00:19:50.497 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:50.497 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:50.497 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:50.497 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:50.497 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:50.497 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:50.497 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:50.497 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:50.756 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:50.756 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:50.756 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:50.756 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:50.756 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:50.756 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:50.756 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:50.756 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:50.756 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:50.756 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:50.756 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:51.015 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:51.275 /dev/nbd0 00:19:51.275 18:04:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:51.275 18:04:32 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:51.275 18:04:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:51.275 18:04:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:51.275 18:04:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:51.275 18:04:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:51.275 18:04:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:51.275 18:04:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:51.275 18:04:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:51.275 18:04:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:51.275 18:04:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:51.275 1+0 records in 00:19:51.275 1+0 records out 00:19:51.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000626017 s, 6.5 MB/s 00:19:51.275 18:04:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.275 18:04:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:51.275 18:04:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.275 18:04:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:51.275 18:04:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:51.275 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:51.275 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:51.275 18:04:33 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:51.275 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:51.275 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:51.535 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:51.535 { 00:19:51.535 "nbd_device": "/dev/nbd0", 00:19:51.535 "bdev_name": "raid5f" 00:19:51.535 } 00:19:51.535 ]' 00:19:51.535 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:51.535 { 00:19:51.535 "nbd_device": "/dev/nbd0", 00:19:51.535 "bdev_name": "raid5f" 00:19:51.535 } 00:19:51.535 ]' 00:19:51.535 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:51.535 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:51.535 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:51.535 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:51.535 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:51.535 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:51.535 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:51.535 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:51.535 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:51.535 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:51.535 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:51.535 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:51.535 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:51.535 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:51.535 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:51.535 256+0 records in 00:19:51.535 256+0 records out 00:19:51.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125301 s, 83.7 MB/s 00:19:51.536 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:51.536 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:51.536 256+0 records in 00:19:51.536 256+0 records out 00:19:51.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312009 s, 33.6 MB/s 00:19:51.536 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:51.536 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:51.536 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:51.536 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:51.536 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:51.536 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:51.536 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:51.536 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:51.536 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:51.795 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:51.795 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:51.795 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:51.795 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:51.795 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:51.795 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:51.795 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:51.795 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:52.055 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:52.055 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:52.055 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:52.055 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:52.055 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:52.055 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:52.055 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:52.055 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:52.055 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:52.055 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:52.055 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:52.314 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:52.314 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:52.314 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:52.314 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:52.314 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:52.314 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:52.314 18:04:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:52.314 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:52.314 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:52.314 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:52.314 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:52.314 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:52.314 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:52.314 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:52.314 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:52.314 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:52.574 malloc_lvol_verify 00:19:52.574 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:52.834 4f208e89-e2fd-4a9b-9192-c10ac412d6ff 00:19:52.834 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:53.093 1cf5e12f-aeea-4c05-9e09-bea350e206c9 00:19:53.093 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:53.093 /dev/nbd0 00:19:53.353 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:53.353 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:53.353 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:53.353 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:53.353 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:53.353 mke2fs 1.47.0 (5-Feb-2023) 00:19:53.353 Discarding device blocks: 0/4096 done 00:19:53.353 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:53.353 00:19:53.353 Allocating group tables: 0/1 done 00:19:53.353 Writing inode tables: 0/1 done 00:19:53.353 Creating journal (1024 blocks): done 00:19:53.353 Writing superblocks and filesystem accounting information: 0/1 done 00:19:53.353 00:19:53.353 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:53.353 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:53.353 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:53.353 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:53.353 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:53.353 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:53.353 18:04:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90656 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90656 ']' 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90656 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90656 00:19:53.613 killing process with pid 90656 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90656' 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90656 00:19:53.613 18:04:35 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90656 00:19:55.521 18:04:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:55.521 00:19:55.521 real 0m6.446s 00:19:55.521 user 0m8.914s 00:19:55.521 sys 0m1.349s 00:19:55.521 ************************************ 00:19:55.521 END TEST bdev_nbd 00:19:55.521 ************************************ 00:19:55.521 18:04:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:55.521 18:04:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:55.521 18:04:37 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:19:55.521 18:04:37 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:19:55.521 18:04:37 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:19:55.521 18:04:37 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:19:55.521 18:04:37 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:55.521 18:04:37 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:55.521 18:04:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:55.521 ************************************ 00:19:55.521 START TEST bdev_fio 00:19:55.521 ************************************ 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:55.521 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:55.521 18:04:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:55.522 ************************************ 00:19:55.522 START TEST bdev_fio_rw_verify 00:19:55.522 ************************************ 00:19:55.522 18:04:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:55.522 18:04:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:55.522 18:04:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:55.522 18:04:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:55.522 18:04:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:55.522 18:04:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.522 18:04:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:55.522 18:04:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:55.522 18:04:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:55.522 18:04:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.522 18:04:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:55.522 18:04:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:55.522 18:04:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:55.522 18:04:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:55.522 18:04:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:19:55.522 18:04:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:55.522 18:04:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:55.782 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:55.782 fio-3.35 00:19:55.782 Starting 1 thread 00:20:07.992 00:20:07.992 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90858: Tue Nov 26 18:04:48 2024 00:20:07.992 read: IOPS=8755, BW=34.2MiB/s (35.9MB/s)(342MiB/10001msec) 00:20:07.992 slat (nsec): min=19531, max=72064, avg=27937.50, stdev=3328.27 00:20:07.992 clat (usec): min=12, max=572, avg=182.77, stdev=67.46 00:20:07.992 lat (usec): min=38, max=614, avg=210.71, stdev=68.08 00:20:07.992 clat percentiles (usec): 00:20:07.992 | 50.000th=[ 180], 99.000th=[ 310], 99.900th=[ 363], 99.990th=[ 408], 00:20:07.992 | 99.999th=[ 570] 00:20:07.992 write: IOPS=9152, BW=35.8MiB/s (37.5MB/s)(353MiB/9875msec); 0 zone resets 00:20:07.992 slat (usec): min=9, max=276, avg=23.20, stdev= 5.10 00:20:07.992 clat (usec): min=72, max=1286, avg=416.04, stdev=59.94 00:20:07.992 lat (usec): min=92, max=1562, avg=439.24, stdev=61.51 00:20:07.992 clat percentiles (usec): 00:20:07.992 | 50.000th=[ 420], 99.000th=[ 537], 99.900th=[ 693], 99.990th=[ 963], 00:20:07.992 | 99.999th=[ 1287] 00:20:07.992 bw ( KiB/s): min=29848, max=39600, per=98.97%, avg=36233.68, stdev=2326.41, samples=19 00:20:07.992 iops : min= 7462, max= 9900, avg=9058.42, stdev=581.60, samples=19 00:20:07.992 lat (usec) : 20=0.01%, 50=0.01%, 
100=7.16%, 250=32.94%, 500=56.64% 00:20:07.992 lat (usec) : 750=3.23%, 1000=0.03% 00:20:07.992 lat (msec) : 2=0.01% 00:20:07.992 cpu : usr=98.98%, sys=0.32%, ctx=24, majf=0, minf=7546 00:20:07.992 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:07.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.992 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.992 issued rwts: total=87559,90384,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:07.992 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:07.992 00:20:07.992 Run status group 0 (all jobs): 00:20:07.992 READ: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=342MiB (359MB), run=10001-10001msec 00:20:07.992 WRITE: bw=35.8MiB/s (37.5MB/s), 35.8MiB/s-35.8MiB/s (37.5MB/s-37.5MB/s), io=353MiB (370MB), run=9875-9875msec 00:20:08.562 ----------------------------------------------------- 00:20:08.562 Suppressions used: 00:20:08.562 count bytes template 00:20:08.562 1 7 /usr/src/fio/parse.c 00:20:08.562 119 11424 /usr/src/fio/iolog.c 00:20:08.563 1 8 libtcmalloc_minimal.so 00:20:08.563 1 904 libcrypto.so 00:20:08.563 ----------------------------------------------------- 00:20:08.563 00:20:08.563 00:20:08.563 real 0m13.056s 00:20:08.563 user 0m13.282s 00:20:08.563 sys 0m0.683s 00:20:08.563 ************************************ 00:20:08.563 END TEST bdev_fio_rw_verify 00:20:08.563 ************************************ 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "42f3378a-067d-4482-961e-b004af1ef062"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "42f3378a-067d-4482-961e-b004af1ef062",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "42f3378a-067d-4482-961e-b004af1ef062",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "821b454a-e548-4c45-8202-bf8cb1112e16",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "21c3f391-3cb2-4408-be36-6ab9467ff3fe",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "8f154073-9f89-4129-96bc-05f85c9baede",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:08.563 /home/vagrant/spdk_repo/spdk 00:20:08.563 ************************************ 00:20:08.563 END TEST bdev_fio 00:20:08.563 ************************************ 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:20:08.563 00:20:08.563 real 0m13.321s 00:20:08.563 user 0m13.400s 00:20:08.563 sys 0m0.804s 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:08.563 18:04:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:08.823 18:04:50 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:08.823 18:04:50 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:08.823 18:04:50 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:08.823 18:04:50 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.823 18:04:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:08.823 ************************************ 00:20:08.823 START TEST bdev_verify 00:20:08.823 ************************************ 00:20:08.823 18:04:50 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:08.823 [2024-11-26 18:04:50.540880] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 
00:20:08.823 [2024-11-26 18:04:50.541128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91026 ] 00:20:09.082 [2024-11-26 18:04:50.718912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:09.082 [2024-11-26 18:04:50.872828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.082 [2024-11-26 18:04:50.872866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.649 Running I/O for 5 seconds... 00:20:11.971 10392.00 IOPS, 40.59 MiB/s [2024-11-26T18:04:54.770Z] 11685.50 IOPS, 45.65 MiB/s [2024-11-26T18:04:55.703Z] 12124.67 IOPS, 47.36 MiB/s [2024-11-26T18:04:56.638Z] 12511.75 IOPS, 48.87 MiB/s [2024-11-26T18:04:56.638Z] 12726.00 IOPS, 49.71 MiB/s 00:20:14.775 Latency(us) 00:20:14.775 [2024-11-26T18:04:56.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.775 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:14.775 Verification LBA range: start 0x0 length 0x2000 00:20:14.775 raid5f : 5.02 6351.54 24.81 0.00 0.00 30122.14 270.09 29992.02 00:20:14.775 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:14.775 Verification LBA range: start 0x2000 length 0x2000 00:20:14.775 raid5f : 5.01 6358.86 24.84 0.00 0.00 30313.55 257.57 30449.91 00:20:14.775 [2024-11-26T18:04:56.638Z] =================================================================================================================== 00:20:14.775 [2024-11-26T18:04:56.638Z] Total : 12710.40 49.65 0.00 0.00 30217.84 257.57 30449.91 00:20:16.677 ************************************ 00:20:16.677 END TEST bdev_verify 00:20:16.677 ************************************ 00:20:16.677 00:20:16.677 real 0m7.704s 00:20:16.677 user 0m14.168s 00:20:16.677 sys 0m0.310s 
00:20:16.677 18:04:58 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.677 18:04:58 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:16.677 18:04:58 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:16.677 18:04:58 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:16.677 18:04:58 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.677 18:04:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:16.677 ************************************ 00:20:16.677 START TEST bdev_verify_big_io 00:20:16.677 ************************************ 00:20:16.677 18:04:58 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:16.677 [2024-11-26 18:04:58.307667] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:20:16.677 [2024-11-26 18:04:58.307805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91128 ] 00:20:16.677 [2024-11-26 18:04:58.489120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:16.935 [2024-11-26 18:04:58.629352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.935 [2024-11-26 18:04:58.629386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.507 Running I/O for 5 seconds... 
00:20:19.821 506.00 IOPS, 31.62 MiB/s [2024-11-26T18:05:02.623Z] 634.00 IOPS, 39.62 MiB/s [2024-11-26T18:05:03.560Z] 676.67 IOPS, 42.29 MiB/s [2024-11-26T18:05:04.595Z] 697.50 IOPS, 43.59 MiB/s [2024-11-26T18:05:04.855Z] 684.80 IOPS, 42.80 MiB/s 00:20:22.992 Latency(us) 00:20:22.992 [2024-11-26T18:05:04.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.992 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:22.992 Verification LBA range: start 0x0 length 0x200 00:20:22.992 raid5f : 5.36 354.84 22.18 0.00 0.00 8872449.74 357.73 375472.63 00:20:22.992 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:22.992 Verification LBA range: start 0x200 length 0x200 00:20:22.992 raid5f : 5.37 354.29 22.14 0.00 0.00 8940851.27 188.70 379135.78 00:20:22.992 [2024-11-26T18:05:04.855Z] =================================================================================================================== 00:20:22.992 [2024-11-26T18:05:04.855Z] Total : 709.13 44.32 0.00 0.00 8906650.50 188.70 379135.78 00:20:24.898 00:20:24.898 real 0m8.133s 00:20:24.898 user 0m15.041s 00:20:24.898 sys 0m0.285s 00:20:24.898 ************************************ 00:20:24.898 END TEST bdev_verify_big_io 00:20:24.898 ************************************ 00:20:24.898 18:05:06 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:24.898 18:05:06 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.898 18:05:06 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:24.898 18:05:06 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:24.898 18:05:06 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:24.898 18:05:06 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:24.898 ************************************ 00:20:24.898 START TEST bdev_write_zeroes 00:20:24.898 ************************************ 00:20:24.898 18:05:06 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:24.898 [2024-11-26 18:05:06.503185] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:20:24.898 [2024-11-26 18:05:06.503321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91232 ] 00:20:24.898 [2024-11-26 18:05:06.681793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.158 [2024-11-26 18:05:06.810417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.725 Running I/O for 1 seconds... 
00:20:26.663 20127.00 IOPS, 78.62 MiB/s 00:20:26.663 Latency(us) 00:20:26.663 [2024-11-26T18:05:08.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.663 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:26.663 raid5f : 1.01 20079.33 78.43 0.00 0.00 6348.62 1903.12 9329.58 00:20:26.663 [2024-11-26T18:05:08.526Z] =================================================================================================================== 00:20:26.663 [2024-11-26T18:05:08.526Z] Total : 20079.33 78.43 0.00 0.00 6348.62 1903.12 9329.58 00:20:28.585 00:20:28.585 real 0m3.674s 00:20:28.585 user 0m3.280s 00:20:28.585 sys 0m0.261s 00:20:28.585 18:05:10 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.585 18:05:10 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:28.585 ************************************ 00:20:28.585 END TEST bdev_write_zeroes 00:20:28.585 ************************************ 00:20:28.585 18:05:10 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:28.585 18:05:10 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:28.585 18:05:10 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.585 18:05:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:28.585 ************************************ 00:20:28.585 START TEST bdev_json_nonenclosed 00:20:28.585 ************************************ 00:20:28.585 18:05:10 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:28.585 [2024-11-26 
18:05:10.256967] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:20:28.585 [2024-11-26 18:05:10.257289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91285 ] 00:20:28.585 [2024-11-26 18:05:10.440583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.844 [2024-11-26 18:05:10.577264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.844 [2024-11-26 18:05:10.577367] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:28.844 [2024-11-26 18:05:10.577398] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:28.844 [2024-11-26 18:05:10.577409] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:29.102 00:20:29.102 real 0m0.721s 00:20:29.102 user 0m0.481s 00:20:29.102 sys 0m0.134s 00:20:29.103 18:05:10 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.103 ************************************ 00:20:29.103 END TEST bdev_json_nonenclosed 00:20:29.103 ************************************ 00:20:29.103 18:05:10 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:29.103 18:05:10 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:29.103 18:05:10 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:29.103 18:05:10 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:29.103 18:05:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:29.103 
************************************ 00:20:29.103 START TEST bdev_json_nonarray 00:20:29.103 ************************************ 00:20:29.103 18:05:10 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:29.362 [2024-11-26 18:05:11.026083] Starting SPDK v25.01-pre git sha1 9f3071c5f / DPDK 24.03.0 initialization... 00:20:29.362 [2024-11-26 18:05:11.026213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91316 ] 00:20:29.362 [2024-11-26 18:05:11.200966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.621 [2024-11-26 18:05:11.327919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.621 [2024-11-26 18:05:11.328158] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:29.621 [2024-11-26 18:05:11.328189] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:29.621 [2024-11-26 18:05:11.328214] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:29.881 00:20:29.881 real 0m0.676s 00:20:29.881 user 0m0.452s 00:20:29.881 sys 0m0.118s 00:20:29.881 ************************************ 00:20:29.881 END TEST bdev_json_nonarray 00:20:29.881 ************************************ 00:20:29.881 18:05:11 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.881 18:05:11 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:29.881 18:05:11 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:20:29.881 18:05:11 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:20:29.881 18:05:11 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:20:29.881 18:05:11 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:20:29.881 18:05:11 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:20:29.881 18:05:11 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:29.881 18:05:11 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:29.881 18:05:11 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:20:29.881 18:05:11 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:20:29.881 18:05:11 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:20:29.881 18:05:11 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:20:29.881 00:20:29.881 real 0m52.120s 00:20:29.881 user 1m11.079s 00:20:29.881 sys 0m5.000s 00:20:29.881 18:05:11 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.881 ************************************ 00:20:29.881 END TEST blockdev_raid5f 00:20:29.881 
************************************
00:20:29.881 18:05:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:20:29.881 18:05:11 -- spdk/autotest.sh@194 -- # uname -s
00:20:29.881 18:05:11 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:20:29.881 18:05:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:20:29.881 18:05:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:20:29.881 18:05:11 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:20:29.881 18:05:11 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:20:29.881 18:05:11 -- spdk/autotest.sh@260 -- # timing_exit lib
00:20:29.881 18:05:11 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:29.881 18:05:11 -- common/autotest_common.sh@10 -- # set +x
00:20:30.138 18:05:11 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:20:30.138 18:05:11 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:20:30.138 18:05:11 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:20:30.138 18:05:11 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:20:30.138 18:05:11 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:20:30.138 18:05:11 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:20:30.138 18:05:11 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:20:30.138 18:05:11 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:20:30.139 18:05:11 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:20:30.139 18:05:11 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:20:30.139 18:05:11 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:20:30.139 18:05:11 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:20:30.139 18:05:11 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:20:30.139 18:05:11 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:20:30.139 18:05:11 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:20:30.139 18:05:11 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:20:30.139 18:05:11 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:20:30.139 18:05:11 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:20:30.139 18:05:11 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:20:30.139 18:05:11 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:20:30.139 18:05:11 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:30.139 18:05:11 -- common/autotest_common.sh@10 -- # set +x
00:20:30.139 18:05:11 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:20:30.139 18:05:11 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:20:30.139 18:05:11 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:20:30.139 18:05:11 -- common/autotest_common.sh@10 -- # set +x
00:20:32.081 INFO: APP EXITING
00:20:32.081 INFO: killing all VMs
00:20:32.081 INFO: killing vhost app
00:20:32.081 INFO: EXIT DONE
00:20:32.340 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:32.340 Waiting for block devices as requested
00:20:32.340 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:20:32.599 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:20:33.537 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:33.537 Cleaning
00:20:33.537 Removing: /var/run/dpdk/spdk0/config
00:20:33.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:20:33.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:20:33.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:20:33.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:20:33.537 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:20:33.537 Removing: /var/run/dpdk/spdk0/hugepage_info
00:20:33.537 Removing: /dev/shm/spdk_tgt_trace.pid56947
00:20:33.537 Removing: /var/run/dpdk/spdk0
00:20:33.537 Removing: /var/run/dpdk/spdk_pid56706
00:20:33.537 Removing: /var/run/dpdk/spdk_pid56947
00:20:33.537 Removing: /var/run/dpdk/spdk_pid57187
00:20:33.537 Removing: /var/run/dpdk/spdk_pid57291
00:20:33.537 Removing: /var/run/dpdk/spdk_pid57358
00:20:33.537 Removing: /var/run/dpdk/spdk_pid57498
00:20:33.537 Removing: /var/run/dpdk/spdk_pid57516
00:20:33.537 Removing: /var/run/dpdk/spdk_pid57737
00:20:33.538 Removing: /var/run/dpdk/spdk_pid57855
00:20:33.538 Removing: /var/run/dpdk/spdk_pid57979
00:20:33.538 Removing: /var/run/dpdk/spdk_pid58112
00:20:33.538 Removing: /var/run/dpdk/spdk_pid58220
00:20:33.538 Removing: /var/run/dpdk/spdk_pid58262
00:20:33.538 Removing: /var/run/dpdk/spdk_pid58296
00:20:33.538 Removing: /var/run/dpdk/spdk_pid58372
00:20:33.538 Removing: /var/run/dpdk/spdk_pid58500
00:20:33.538 Removing: /var/run/dpdk/spdk_pid58955
00:20:33.538 Removing: /var/run/dpdk/spdk_pid59030
00:20:33.538 Removing: /var/run/dpdk/spdk_pid59114
00:20:33.538 Removing: /var/run/dpdk/spdk_pid59131
00:20:33.538 Removing: /var/run/dpdk/spdk_pid59288
00:20:33.538 Removing: /var/run/dpdk/spdk_pid59309
00:20:33.538 Removing: /var/run/dpdk/spdk_pid59463
00:20:33.538 Removing: /var/run/dpdk/spdk_pid59490
00:20:33.538 Removing: /var/run/dpdk/spdk_pid59557
00:20:33.538 Removing: /var/run/dpdk/spdk_pid59586
00:20:33.538 Removing: /var/run/dpdk/spdk_pid59650
00:20:33.538 Removing: /var/run/dpdk/spdk_pid59674
00:20:33.538 Removing: /var/run/dpdk/spdk_pid59869
00:20:33.538 Removing: /var/run/dpdk/spdk_pid59911
00:20:33.538 Removing: /var/run/dpdk/spdk_pid59994
00:20:33.538 Removing: /var/run/dpdk/spdk_pid61382
00:20:33.538 Removing: /var/run/dpdk/spdk_pid61593
00:20:33.538 Removing: /var/run/dpdk/spdk_pid61739
00:20:33.538 Removing: /var/run/dpdk/spdk_pid62382
00:20:33.538 Removing: /var/run/dpdk/spdk_pid62594
00:20:33.538 Removing: /var/run/dpdk/spdk_pid62734
00:20:33.538 Removing: /var/run/dpdk/spdk_pid63383
00:20:33.538 Removing: /var/run/dpdk/spdk_pid63713
00:20:33.538 Removing: /var/run/dpdk/spdk_pid63864
00:20:33.538 Removing: /var/run/dpdk/spdk_pid65266
00:20:33.538 Removing: /var/run/dpdk/spdk_pid65519
00:20:33.538 Removing: /var/run/dpdk/spdk_pid65670
00:20:33.538 Removing: /var/run/dpdk/spdk_pid67061
00:20:33.538 Removing: /var/run/dpdk/spdk_pid67325
00:20:33.538 Removing: /var/run/dpdk/spdk_pid67471
00:20:33.538 Removing: /var/run/dpdk/spdk_pid68873
00:20:33.538 Removing: /var/run/dpdk/spdk_pid69324
00:20:33.538 Removing: /var/run/dpdk/spdk_pid69470
00:20:33.538 Removing: /var/run/dpdk/spdk_pid70982
00:20:33.538 Removing: /var/run/dpdk/spdk_pid71251
00:20:33.538 Removing: /var/run/dpdk/spdk_pid71402
00:20:33.538 Removing: /var/run/dpdk/spdk_pid72911
00:20:33.538 Removing: /var/run/dpdk/spdk_pid73180
00:20:33.538 Removing: /var/run/dpdk/spdk_pid73331
00:20:33.538 Removing: /var/run/dpdk/spdk_pid74839
00:20:33.796 Removing: /var/run/dpdk/spdk_pid75334
00:20:33.796 Removing: /var/run/dpdk/spdk_pid75485
00:20:33.796 Removing: /var/run/dpdk/spdk_pid75634
00:20:33.796 Removing: /var/run/dpdk/spdk_pid76069
00:20:33.796 Removing: /var/run/dpdk/spdk_pid76817
00:20:33.796 Removing: /var/run/dpdk/spdk_pid77193
00:20:33.796 Removing: /var/run/dpdk/spdk_pid77895
00:20:33.796 Removing: /var/run/dpdk/spdk_pid78351
00:20:33.796 Removing: /var/run/dpdk/spdk_pid79123
00:20:33.796 Removing: /var/run/dpdk/spdk_pid79536
00:20:33.796 Removing: /var/run/dpdk/spdk_pid81523
00:20:33.796 Removing: /var/run/dpdk/spdk_pid81969
00:20:33.796 Removing: /var/run/dpdk/spdk_pid82423
00:20:33.796 Removing: /var/run/dpdk/spdk_pid84531
00:20:33.796 Removing: /var/run/dpdk/spdk_pid85022
00:20:33.796 Removing: /var/run/dpdk/spdk_pid85545
00:20:33.796 Removing: /var/run/dpdk/spdk_pid86611
00:20:33.796 Removing: /var/run/dpdk/spdk_pid86941
00:20:33.796 Removing: /var/run/dpdk/spdk_pid87891
00:20:33.796 Removing: /var/run/dpdk/spdk_pid88225
00:20:33.796 Removing: /var/run/dpdk/spdk_pid89169
00:20:33.796 Removing: /var/run/dpdk/spdk_pid89502
00:20:33.796 Removing: /var/run/dpdk/spdk_pid90184
00:20:33.796 Removing: /var/run/dpdk/spdk_pid90470
00:20:33.796 Removing: /var/run/dpdk/spdk_pid90543
00:20:33.796 Removing: /var/run/dpdk/spdk_pid90591
00:20:33.796 Removing: /var/run/dpdk/spdk_pid90843
00:20:33.796 Removing: /var/run/dpdk/spdk_pid91026
00:20:33.796 Removing: /var/run/dpdk/spdk_pid91128
00:20:33.796 Removing: /var/run/dpdk/spdk_pid91232
00:20:33.796 Removing: /var/run/dpdk/spdk_pid91285
00:20:33.796 Removing: /var/run/dpdk/spdk_pid91316
00:20:33.796 Clean
00:20:33.796 18:05:15 -- common/autotest_common.sh@1453 -- # return 0
00:20:33.796 18:05:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:20:33.796 18:05:15 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:33.796 18:05:15 -- common/autotest_common.sh@10 -- # set +x
00:20:33.796 18:05:15 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:20:33.796 18:05:15 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:33.796 18:05:15 -- common/autotest_common.sh@10 -- # set +x
00:20:34.055 18:05:15 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:34.055 18:05:15 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:20:34.055 18:05:15 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:20:34.055 18:05:15 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:20:34.055 18:05:15 -- spdk/autotest.sh@398 -- # hostname
00:20:34.055 18:05:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:20:34.055 geninfo: WARNING: invalid characters removed from testname!
00:21:00.622 18:05:40 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:02.556 18:05:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:05.091 18:05:46 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:07.627 18:05:48 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:09.532 18:05:51 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:12.064 18:05:53 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:14.606 18:05:56 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:21:14.606 18:05:56 -- spdk/autorun.sh@1 -- $ timing_finish
00:21:14.606 18:05:56 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:21:14.606 18:05:56 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:21:14.606 18:05:56 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:21:14.606 18:05:56 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:14.606 + [[ -n 5423 ]]
00:21:14.606 + sudo kill 5423
00:21:14.616 [Pipeline] }
00:21:14.635 [Pipeline] // timeout
00:21:14.640 [Pipeline] }
00:21:14.655 [Pipeline] // stage
00:21:14.661 [Pipeline] }
00:21:14.675 [Pipeline] // catchError
00:21:14.686 [Pipeline] stage
00:21:14.688 [Pipeline] { (Stop VM)
00:21:14.702 [Pipeline] sh
00:21:14.981 + vagrant halt
00:21:18.266 ==> default: Halting domain...
00:21:24.840 [Pipeline] sh
00:21:25.124 + vagrant destroy -f
00:21:28.414 ==> default: Removing domain...
00:21:28.428 [Pipeline] sh
00:21:28.951 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:21:28.961 [Pipeline] }
00:21:28.978 [Pipeline] // stage
00:21:28.986 [Pipeline] }
00:21:29.001 [Pipeline] // dir
00:21:29.008 [Pipeline] }
00:21:29.027 [Pipeline] // wrap
00:21:29.036 [Pipeline] }
00:21:29.051 [Pipeline] // catchError
00:21:29.063 [Pipeline] stage
00:21:29.066 [Pipeline] { (Epilogue)
00:21:29.081 [Pipeline] sh
00:21:29.365 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:21:35.936 [Pipeline] catchError
00:21:35.938 [Pipeline] {
00:21:35.950 [Pipeline] sh
00:21:36.231 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:21:36.498 Artifacts sizes are good
00:21:36.529 [Pipeline] }
00:21:36.542 [Pipeline] // catchError
00:21:36.549 [Pipeline] archiveArtifacts
00:21:36.553 Archiving artifacts
00:21:36.693 [Pipeline] cleanWs
00:21:36.705 [WS-CLEANUP] Deleting project workspace...
00:21:36.705 [WS-CLEANUP] Deferred wipeout is used...
00:21:36.715 [WS-CLEANUP] done
00:21:36.717 [Pipeline] }
00:21:36.731 [Pipeline] // stage
00:21:36.736 [Pipeline] }
00:21:36.749 [Pipeline] // node
00:21:36.753 [Pipeline] End of Pipeline
00:21:36.785 Finished: SUCCESS